Sorry tormentos, PS5 lost the GPU wars


#201 Random_Matt
Member since 2013 • 4965 Posts

Console gamers carping on about power is cringe worthy, unlikely they know anything other than buzz words and numbers. Do some research and build a rig.


#202 Pedro
Member since 2002 • 39086 Posts

@Random_Matt said:

Console gamers carping on about power is cringe worthy, unlikely they know anything other than buzz words and numbers. Do some research and build a rig.

Your assumption seems to rely heavily on the idea that console gamers are ignorant about PC hardware, which is not accurate.


#203 hrt_rulz01
Member since 2006 • 20024 Posts

@BlackShirt20 said:

@Sagemode87: Dude. Stop.

The PS5 is literally a 9.2TF machine overclocked to hell and back. The Xbox Series X is a lot more powerful: 20%-30%, depending on what is going on in the game.

That’s a lot.

This... what isn't being explained properly in the pro-Sony media is that the 10.28 TFLOP number is the maximum performance the PS5 can achieve. The "real" performance will probably be in the 9 TFLOP area, sometimes boosting to over 10, whereas the XSX can achieve a sustained 12 TFLOPs.

Tom Warren from The Verge


#204  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Uruz7laevatein said:

@ronvalencia: So what you are saying is the XSX is running on a PC Windows 10 installation... with the game code path compiled for the PC version, rather than running "Ultra" settings on the separate Xbox code path. Do XSX APU drivers even exist on Windows 10 PC?

Red herring.

Xbox Series X GPU: similar rasterization performance to RTX 2080 -- Digital Foundry. This is running Gears 5's built-in benchmark at PC Ultra settings, from a two-week raw port without RDNA 2 optimizations.

https://youtu.be/oNZibJazWTo?t=762


#205 BlackShirt20
Member since 2005 • 1642 Posts

@ronvalencia: It's really incredible to see just how powerful the new Xbox Series X is. The machine is the definition of a beast.


#207 CrumUnderMe
Member since 2007 • 361 Posts

@BlackShirt20 said:

@ronvalencia: It's really incredible to see just how powerful the new Xbox Series X is. The machine is the definition of a beast.

word it is the spartan446 of consoles


#208  Edited By Uruz7laevatein
Member since 2009 • 133 Posts

@ronvalencia: So basically it ran at Ultra settings using the Xbox code base, not Windows 10 with GameWorks. Gotcha.


#209 Uruz7laevatein
Member since 2009 • 133 Posts

@hrt_rulz01: The difference between the "boost" clock and "base" clock as described by Cerny for the PS5 was a 2% clock difference for a 10% power-consumption difference. That puts the base clock at around (2.23 * 0.98) = 2.18 GHz, or (10.3 * 0.98) = 10.1 TFLOPs, which is such a minor nitpick it's not even worth mentioning. The boost clock is just a minor bonus.
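As a sanity check on those numbers, here is a minimal sketch (assuming the standard peak-FP32 formula for RDNA GPUs of 64 shaders per CU and 2 ops per clock, and the PS5's 36 CUs; the 2% figure is from the post above):

```python
# Peak FP32 throughput for an RDNA-style GPU:
# TFLOPS = CUs * 64 shaders/CU * 2 ops/clock * clock (GHz) / 1000
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000

boost = tflops(36, 2.23)         # PS5's advertised peak: ~10.28 TFLOPS
base = tflops(36, 2.23 * 0.98)   # 2% lower clock: ~10.07 TFLOPS

print(round(boost, 2), round(base, 2))  # 10.28 10.07
```

Since peak TFLOPS is linear in clock, a 2% clock drop is the same 2% drop in peak compute, which is why it reads as a minor nitpick.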


#210 tormentos
Member since 2003 • 30900 Posts

@BlackShirt20 said:

@tormentos: actually I’m looking pretty good. As I have said from day one. Before spec leaks came out. All games will look and play better on Xbox Series X.

You can claim PS5 is running full RDNA 2.0 but with no variable shading mention (as well as other advanced tech not mentioned by Sony), it is likely not full RDNA 2.0 as DF has stated.

So again.......

Games will look and play better on Xbox Series X. Is my statement opinion or fact?

Yeah, I remember when you people assumed Sony didn't have hardware RT even after Sony confirmed it, simply because Sony didn't mention it specifically; even ronvalencia hopped onto that train and was wrong as well.

Look better, sure; play better, that is totally debatable.

@ronvalencia said:

RX 5700 XT RDNA v1 wouldn't be delivering XSX's RTX 2080 Super level Forza Horizon 4 and Gears 5 results.

PS5's 36 CU RDNA v2 is NOT even RDNA v1 IPC, hence you can't use RX 5700XT vs RX 5700 examples.

RDNA 2 is effectively Turing IPC like solution. NVIDIA will be on its next CUDA architecture with Ampere later this year.

MS always wanted NVIDIA xx80 class GPU without paying NVIDIA's asking price. RDNA's Wave32 switch is the quest to be CUDA warp32 clone.

AMD lost the compute length format war, hence adopting wave32/warp32 CUDA length. AMD is pretty good at copying and with a lower asking price business model.

Unreal Engine 4 hardware optimization is a requirement. Sony jumps in for the same RDNA 2 ride.

-----------

On the server market, wave32 support will be needed for AMD's CUDA warp32 translation tool.

This is what I don't like about your bullshit and MS ass-kissing.

My example is better because it uses RDNA and shows the gap between two GPUs when the bandwidth is the same: 1.8TF of difference barely produces a gap, 5, 10 or 15FPS depending on the game.

While RDNA 1 doesn't have all the features or power of RDNA 2, it is the straight comparison; the point is that both are the same architecture, and RDNA 1 represents RDNA 2 better than Turing ever will, period. Turing has much better bandwidth management than RDNA 2; it has always been that way with Nvidia GPUs.

I hope you don't start your usual lobbying for the Xbox, trying to make things bigger than they are; hell, the SSD on the PS5 has a bigger gap in speed than the gap in power these two machines have.

The Xbox Series X will be on top of the PS5, and that isn't debatable; what is debatable is by how much, considering the gap is just 17% more GPU power.


#211 xhawk27
Member since 2010 • 11455 Posts

The PS5 will be 11.6 Tflops. Hahahahahhaha


#212  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Uruz7laevatein said:

@ronvalencia: So basically it ran at Ultra settings using the Xbox code base, not Windows 10 with GameWorks. Gotcha.

Red herring. For Gears 5, 12 TFLOPS of RDNA 2 delivered RTX 2080-like results.

https://www.techpowerup.com/review/nvidia-geforce-rtx-2080-founders-edition/37.html

RTX 2080 FE has a 1897 MHz average clock speed, which yields 11.655 TFLOPS.

https://devblogs.microsoft.com/directx/announcing-directx-12-ultimate/

Xbox Series X's primary API is "DirectX12 Ultimate", which is unified with the PC's "DirectX12 Ultimate" API and includes Mesh Shaders, Variable Rate Shading, Sampler Feedback and DirectX Raytracing (DXR) Tier 1.1.

  • https://community.amd.com/community/gaming/blog/2020/03/19/powering-next-generation-gaming-visuals-with-amd-rdna-2-and-directx-12-ultimate
  • https://www.nvidia.com/en-us/geforce/news/geforce-rtx-ready-for-directx-12-ultimate

Both RDNA and Turing support Shader Model 6's wave32 compute model.


#213 DaVillain-  Moderator
Member since 2014 • 41926 Posts

This post isn't directed towards you tormentos, but I want to mention this to everyone here.

Regarding the GPU between PS5 & XSX: both next-gen consoles are the biggest beasts I have seen so far, and while XSX may have an edge, PS5 is still a formidable console. It all comes down to real-world performance, and with that beast of an SSD the PS5 has, the edge may or may not matter. All I'm saying is, don't underestimate the PS5.


#214 ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@BlackShirt20 said:

@tormentos: actually I’m looking pretty good. As I have said from day one. Before spec leaks came out. All games will look and play better on Xbox Series X.

You can claim PS5 is running full RDNA 2.0 but with no variable shading mention (as well as other advanced tech not mentioned by Sony), it is likely not full RDNA 2.0 as DF has stated.

So again.......

Games will look and play better on Xbox Series X. Is my statement opinion or fact?

Yeah, I remember when you people assumed Sony didn't have hardware RT even after Sony confirmed it, simply because Sony didn't mention it specifically; even ronvalencia hopped onto that train and was wrong as well.

Look better, sure; play better, that is totally debatable.

@ronvalencia said:

RX 5700 XT RDNA v1 wouldn't be delivering XSX's RTX 2080 Super level Forza Horizon 4 and Gears 5 results.

PS5's 36 CU RDNA v2 is NOT even RDNA v1 IPC, hence you can't use RX 5700XT vs RX 5700 examples.

RDNA 2 is effectively Turing IPC like solution. NVIDIA will be on its next CUDA architecture with Ampere later this year.

MS always wanted NVIDIA xx80 class GPU without paying NVIDIA's asking price. RDNA's Wave32 switch is the quest to be CUDA warp32 clone.

AMD lost the compute length format war, hence adopting wave32/warp32 CUDA length. AMD is pretty good at copying and with a lower asking price business model.

Unreal Engine 4 hardware optimization is a requirement. Sony jumps in for the same RDNA 2 ride.

-----------

On the server market, wave32 support will be needed for AMD's CUDA warp32 translation tool.

This is what I don't like about your bullshit and MS ass-kissing.

My example is better because it uses RDNA and shows the gap between two GPUs when the bandwidth is the same: 1.8TF of difference barely produces a gap, 5, 10 or 15FPS depending on the game.

While RDNA 1 doesn't have all the features or power of RDNA 2, it is the straight comparison; the point is that both are the same architecture, and RDNA 1 represents RDNA 2 better than Turing ever will, period. Turing has much better bandwidth management than RDNA 2; it has always been that way with Nvidia GPUs.

I hope you don't start your usual lobbying for the Xbox, trying to make things bigger than they are; hell, the SSD on the PS5 has a bigger gap in speed than the gap in power these two machines have.

The Xbox Series X will be on top of the PS5, and that isn't debatable; what is debatable is by how much, considering the gap is just 17% more GPU power.

Reminder, AMD sells more higher-margin products to MS and Windows PC market when compared to Sony i.e. Azure cloud (e.g. Epyc Rome), Surface Laptop 2, corporate PCs (e.g. Ryzen 9 based workstations), RX NAVI OEM GPUs and 'etc'.

When AMD's console sales faded late last year and into this year (due to Osborne effects), AMD PC CPU and GPU OEM sales still yielded revenue growth for AMD. The strength of Ryzen PC CPU sales has partly restored K7/K8-era AMD, enabling AMD's CPU division to stand on its own.

DXR is an initiative pushed by MS and NVIDIA which AMD follows.

VRS is an initiative pushed by MS which NVIDIA and AMD follow.

FACTS from AMD, https://www.anandtech.com/show/14579/all-ryzen-qa-with-amd-ceo-dr-lisa-su

David Wang, AMD: "We started RDNA before the Sony engagement. I think RDNA is revolutionary, and it is very flexible in terms of being able to be customized for different types of workloads."

RDNA was designed before the Sony engagement, hence debunking any NAVI being exclusive to PS5 e.g.

https://segmentnext.com/2018/06/13/amd-navi-sony-playstation-5/

https://www.pcgamesn.com/amd/navi-sony-ps5-hardware-exclusivity

For Shader Model 6's wave32 support, NAVI's Wave32 is clearly CUDA warp32 influenced which dominates the Windows dGPU platform.

-----------

Your GPU argument doesn't factor in memory bandwidth increase with TFLOPS increase.


#215 CrumUnderMe
Member since 2007 • 361 Posts

@davillain- said:

This post isn't directed towards you tormentos, but I want to mention this to everyone here.

Regarding the GPU between PS5 & XSX: both next-gen consoles are the biggest beasts I have seen so far, and while XSX may have an edge, PS5 is still a formidable console. It all comes down to real-world performance, and with that beast of an SSD the PS5 has, the edge may or may not matter. All I'm saying is, don't underestimate the PS5.

but this reply is directed at you:


#216 tormentos
Member since 2003 • 30900 Posts

@BlackShirt20 said:

Xbox has 20% more raw power.

Xbox has a faster processor.

Xbox also has faster ram.

Do any of these things matter? Nope. Only thing Sony Pony’s want to brag about now is the fact their games will load a couple seconds faster. Congratulations Sony. You did it.

It has 17% more power.

100mhz lol.

It has 10GB of faster RAM and 6GB of SLOWER RAM, and the Xbox Series X will use part of that slower RAM for the GPU as well; not to mention it is using a split memory setup again, which is not very fun for developers. Let's not forget the Xbox One and its dual memory setup.

Actually the PS5 SSD is more than TWICE as fast as the Xbox Series X one, so while you hype 17% more power, the PS5 SSD is more than 100% faster.

But again, you are a fool who thinks the Xbox fan doesn't cool the damn system because it doesn't blow cold air onto the GPU. lol

You are basically on Blackace's level of stupidity now.

@xhawk27 said:

The Xbox Series X is the most powerful Console next gen. Almost 2 Tflop difference and 16 more Compute Units. 320 bus vs 256 on the PS5.

Tomato You lose. LOL

If you survived the shitty Xbox One and its 720p games, which YOU defended, I don't see how it could be worse for me; after all, you denied a 40% advantage and downplayed it as nothing, so don't try to act like 17% more GPU means anything.🤷‍♂️

At least I am not paying $100 more for 40% less and 720p; I am getting 4K with very little difference compared to the Xbox.

@gifford38 said:
@rmpumper said:

Not even full BC for PS4 games. What a fucking scam.

The Xbox One did not support all 360 games either. Instead of making exclusives all gen, all their energy went into BC.

The Xbox One didn't either; lemmings are just too stupid to understand it.

Their job is to attack Sony no matter what; it doesn't matter if MS did the same shit as well. Maybe some are too young to remember the backlash over how the 360's BC supported Barbie Horse Adventures but not several great Xbox games, or how BC was tied to the HDD, so if you had a Core system you were fu**.

The fun part was the PS3 was compatible with almost all PS2 and PS1 games, and these people screamed for years that the PS3 had no games..


#217  Edited By ronvalencia
Member since 2008 • 29612 Posts
@xhawk27 said:

The PS5 will be 11.6 Tflops. Hahahahahhaha

PS5 has ~11.2 TFLOPS with CPU, GPU (not including RT cores with 11 TFLOPS equivalent) and DSP (AMD's CU based stream processor without raster hardware).

It looks like AMD's Polaris CU TrueAudio IP was recycled for PS5's DSP. Sony paid for the AMD CU-based DSP R&D.

Technically, PS5 has active RDNA v2 36 CU for GPU and at least 1 CU for DSP.

I would like to see AMD enter the PC DSP soundcard market.


#218 tormentos
Member since 2003 • 30900 Posts

@ronvalencia said:

The difference is like stock RTX 2070 Super and MSI RTX 2080 Super Gaming X Trio OC (12.15 TFLOPS average, 496 GB/s BW) level.

PS5's 448 GB/s is shared between CPU and GPU which will require extra programming discipline.

XSX has 560 GB/s memory bandwidth.

That in bold will be true when you PROVE to all of us that AMD manufactures the 2070 and 2080 Super, and that the PS5 and Series X use those GPUs.

Yes, and it is 448GB/s on ALL its memory.

The Xbox uses 10GB of fast memory for video, and part of the other 6GB, also used by the video card, is slower than the PS5's bandwidth; not to mention that having two different bandwidths creates more work for developers.

""In terms of how the memory is allocated, games get a total of 13.5GB in total, which encompasses all 10GB of GPU optimal memory and 3.5GB of standard memory.""

3.5GB of that memory operates at 336GB/s, 112GB/s slower than the PS5's, so it is not a complete win all the way for the Series X here.
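For illustration only, a naive blended-bandwidth figure for the split pool described here can be computed like this (assuming accesses are spread uniformly across the 13.5GB visible to games, which real workloads will not do; actual effective bandwidth depends entirely on data placement):

```python
# Illustrative only: naive weighted average of XSX's split memory pool.
fast_gb, fast_bw = 10.0, 560.0   # GPU-optimal pool (GB, GB/s)
slow_gb, slow_bw = 3.5, 336.0    # standard pool available to games

blended = (fast_gb * fast_bw + slow_gb * slow_bw) / (fast_gb + slow_gb)
print(round(blended, 1))  # 501.9 GB/s, vs PS5's uniform 448 GB/s
```

Under this (unrealistic) uniform-access assumption the split pool still averages above the PS5's 448 GB/s, but the gap is smaller than the headline 560 vs 448 comparison suggests.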


#219 BlackShirt20
Member since 2005 • 1642 Posts

@tormentos: “ The xbox series X will be on top of the PS5 and that isn't debatable.”

Did you just admit the new Xbox is a better machine?


#220 tormentos
Member since 2003 • 30900 Posts

@ronvalencia said:

Reminder, AMD sells more higher-margin products to MS and Windows PC market when compared to Sony i.e. Azure cloud (e.g. Epyc Rome), Surface Laptop 2, corporate PCs (e.g. Ryzen 9 based workstations), RX NAVI OEM GPUs and 'etc'.

When AMD's console sales faded late last year and into this year (due to Osborne effects), AMD PC CPU and GPU OEM sales still yielded revenue growth for AMD. The strength of Ryzen PC CPU sales has partly restored K7/K8-era AMD, enabling AMD's CPU division to stand on its own.

DXR is an initiative pushed by MS and NVIDIA which AMD follows.

VRS is an initiative pushed by MS which NVIDIA and AMD follow.

FACTS from AMD, https://www.anandtech.com/show/14579/all-ryzen-qa-with-amd-ceo-dr-lisa-su

David Wang, AMD: "We started RDNA before the Sony engagement. I think RDNA is revolutionary, and it is very flexible in terms of being able to be customized for different types of workloads."

RDNA was designed before the Sony engagement, hence debunking any NAVI being exclusive to PS5 e.g.

https://segmentnext.com/2018/06/13/amd-navi-sony-playstation-5/

https://www.pcgamesn.com/amd/navi-sony-ps5-hardware-exclusivity

For Shader Model 6's wave32 support, NAVI's Wave32 is clearly CUDA warp32 influenced which dominates the Windows dGPU platform.

-----------

Your GPU argument doesn't factor in memory bandwidth increase with TFLOPS increase.

Nothing in this freaking post addresses anything I said.

Man, you are the master of arguing bullshit no one is arguing.

Just like yours doesn't show how Turing has 10GB of 560GB/s memory and 3.5GB of 336GB/s memory.

So again, you only look at what you freaking like and ignore everything else; when you show me how the 2070 and 2080 Super have mixed memory bandwidth, you will begin to make a point.

@xhawk27 said:

The PS5 will be 11.6 Tflops. Hahahahahhaha

No, it is 10.28, just 17% apart power-wise. But again, if you didn't care about 720p, why would I care about this? I am getting 4K with a few less frames; you got 720p for $100 more and fewer frames too.

🤣


#221 Martin_G_N
Member since 2006 • 1979 Posts

@ronvalencia: That sound processor sounds exciting; it's about time they improved the audio in games. It's funny that the PS4 uses only one Jaguar core for audio, while the PS3 could have up to 512 audio channels on the Cell BE. No wonder the audio in most games is worse than the previous gen.

I remember the 7.1 surround in KZ2 was pretty awesome; not much has happened with audio since. So this new audio processor should add new possibilities.


#222  Edited By Uruz7laevatein
Member since 2009 • 133 Posts

@tormentos: XSX having a segmented memory pool also means more memory overhead (bandwidth and capacity wasted on copying duplicate/redundant data). It also means the XSX GPU can only read/write from one pool or the other at a time, not both at once, so graphics/game data is limited to 10GB at best (and perhaps about 6.5GB in the worst case for graphics). All of this indicates XSX's effective memory bandwidth is lower (the same issues the PS3 once faced vs the 360, except with the roles reversed). Extra programming discipline would be more of a challenge on XSX, if anything, compared to the UMA on 360 or the hUMA on PS4/XB1X/PS5. The only engineering reason MS went with such a design was to hit the 16GB GDDR6 target on a 320-bit bus as the console hit the acceptable BOM limit (whatever it is) set by M$; more CUs mean a wider memory bus as a consequence, and it would have been a fully unified 320- to 384-bit memory bus if that weren't prohibitively expensive for the budget.


#223 BlackShirt20
Member since 2005 • 1642 Posts

@tormentos: Just out of curiosity, is that PS5 GPU performance locked, or could its performance vary from say.....10.2TF to say 9.2TF?

Thanks in advance.


#224  Edited By Uruz7laevatein
Member since 2009 • 133 Posts

@Martin_G_N: IIRC, accurate, high-fidelity 3D ray-traced audio rendered in real time, as on the PS5, would be very expensive in compute (~20% of GPU compute or even more) if done in software, so a dedicated ASIC/DSP like the Tempest Engine was created to offload that task from the CPU/GPU with far fewer GFLOPs/TFLOPs than a couple of regular RDNA2 CUs/Zen2 cores would need. Modifying a GPU CU into a dedicated sound ASIC requires a lot of engineering work (it's almost like creating a new GPU architecture, but on a smaller scale), courtesy of AMD and Sony engineers (mostly AMD, of course). It's pretty impressive that they went out of their way in the budget, at the expense of a marginally higher GPU CU count, to do so.


#225  Edited By Uruz7laevatein
Member since 2009 • 133 Posts

@BlackShirt20: According to Cerny, the PS5 will run at the 10.3 TFLOPs "boost" speed most if not all of the time, unless required to throttle down to the 10.1 TFLOPs "base" speed, trading a 2% clock-speed difference for a 10% reduction in power (dynamic power scales roughly with voltage squared times frequency, P ≈ C·V²·f, so it is far from linear in clock). The "boost" label is very misleading.
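To make the non-linearity concrete: dynamic CMOS power scales roughly as P ≈ C·V²·f, so cutting clock and voltage together saves far more power than the clock cut alone suggests. A hedged sketch (the 4% voltage figure is purely illustrative, not a number from Sony):

```python
# Illustrative: dynamic power scales as P ~ C * V^2 * f. Lowering the
# clock usually allows a voltage drop too, so power falls much faster
# than frequency does.
def relative_power(clock_ratio: float, voltage_ratio: float) -> float:
    return clock_ratio * voltage_ratio ** 2

# A 2% clock cut alone gives only ~2% power savings...
print(round(1 - relative_power(0.98, 1.0), 3))   # 0.02
# ...but paired with a hypothetical ~4% voltage cut it approaches
# the ~10% figure Cerny cites.
print(round(1 - relative_power(0.98, 0.96), 3))  # 0.097
```

This is why a small frequency reduction can buy a disproportionately large power reduction.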


#226  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:

Nothing on this freaking post adress anything i say.

Man you are the master of arguing bullshit no one is arguing.

Just like yours doesn't show how Turing has 10GB of 560GB/s memory and 3.5GB of 336GB/s memory.

So again you only look at what you freaking like and ignore everything else when you show me how the 2070 and 2080 super have mix memory bandwidth you will begin to make a point.

1. I have addressed "This is what i don't like about your bullshit and MS ass kissing." Hint: real-world semiconductor market conditions don't care about your feels.

MS has the market power over desktop PCs.

2. Without using XSX's optimizations (e.g. AMD Fusion links) for the two-week raw Gears 5 port with the "DirectX12 Ultimate" API, XSX's 560 GB/s memory bandwidth has to be shared between CPU and GPU, hence XSX would act like a PC with a Ryzen 7 3700X CPU and an RTX 2080 GPU.

XSX's "DirectX12 Ultimate" API is unified with the PC's "DirectX12 Ultimate" API (which comes with Windows 10 2004) and is slightly different from XBO's "DirectX12.X".

https://devblogs.microsoft.com/directx/directx-12-ultimate-getting-started-guide/

Next retail MS "Windows 10 version 2004" update comes with "DirectX12 Ultimate" API.

XSX can brute-force like a gaming PC with a comparably spec'ed CPU and GPU configuration.

3. Sony is less transparent with their PS5 development, hence less news from them.


#227  Edited By ronvalencia
Member since 2008 • 29612 Posts
@Uruz7laevatein said:

@tormentos: XSX having a segmented memory pool also means more memory overhead (bandwidth and capacity wasted on copying duplicate/redundant data). It also means the XSX GPU can only read/write from one pool or the other at a time, not both at once, so graphics/game data is limited to 10GB at best (and perhaps about 6.5GB in the worst case for graphics). All of this indicates XSX's effective memory bandwidth is lower (the same issues the PS3 once faced vs the 360, except with the roles reversed). Extra programming discipline would be more of a challenge on XSX, if anything, compared to the UMA on 360 or the hUMA on PS4/XB1X/PS5. The only engineering reason MS went with such a design was to hit the 16GB GDDR6 target on a 320-bit bus as the console hit the acceptable BOM limit (whatever it is) set by M$; more CUs mean a wider memory bus as a consequence, and it would have been a fully unified 320- to 384-bit memory bus if that weren't prohibitively expensive for the budget.

Nope, the 336 GB/s 6 GB address space would be allocated to CPU and audio workloads, while the 560 GB/s 10 GB address space is dedicated to the GPU.

The 336 GB/s 6 GB address space is reduced by 2.5 GB for OS-related services, hence 3.5 GB of the 336 GB/s address space is available to the game programmer.

This is not PS3 XDR (for CELL) and GDDR3 (for RSX) split.

X360 has unified 512 MB GDDR3 with non-unified 10 MB EDRAM.

XBO has unified 8GB DDR3 with non-unified 8MB (for DSP) and 32 MB eSRAM (for GPU).


#228  Edited By Uruz7laevatein
Member since 2009 • 133 Posts

@ronvalencia: EDRAM in the 360 was just LLC for the GPU but still unified, same for the most part on XBO (albeit 32MB eSRAM being a scratchpad)


#229  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Uruz7laevatein said:

@ronvalencia: EDRAM in the 360 was just LLC for the GPU but still unified, same for the most part on XBO (albeit 32MB eSRAM being a scratchpad)

For X360, CPU doesn't have access to GPU's 10 MB EDRAM framebuffer.

For XBO, CPU doesn't have access to GPU's 32 MB eSRAM framebuffer.

Certain raster workloads don't need the CPU to be involved.

For X1X, XBO's 32 MB eSRAM is assigned to a certain memory address range as a virtual eSRAM area. The whole GDDR5 memory pool is addressable by the CPU.

For XSX, the above is an asymmetric unified** memory architecture, i.e.

the 6 GB address range has 336 GB/s,

the 10 GB address range has 560 GB/s.

**CPU addressable.

CPU shouldn't be involved with large scale memory manipulation when the GPU has the large scale DMA engines and TMU load-store arrays.


#230 I_P_Daily
Member since 2015 • 16127 Posts

@Juub1990 said:

@i_p_daily: lmao looks like Ron found his soulmate.

Do you think we all will get an invite to the wedding :P


#231 BlackShirt20
Member since 2005 • 1642 Posts

@Uruz7laevatein: I am betting it drops well under 10 often, especially with how high the GPU is being clocked.

Regardless, being 20% more powerful with RDNA 2.0 technology is the equivalent of two standard PS4s or a PS4 Pro. Let that sink in.

https://www.windowscentral.com/xbox-series-x-way-more-powerful-ps5-heres-how-much-more


#232  Edited By Uruz7laevatein
Member since 2009 • 133 Posts

@BlackShirt20:

The clock-speed difference explicitly stated by Cerny had to do with power consumption, NOT temperature (meaning the cooling solution is more than adequate from an engineering standpoint), and this was thought out well over a year in advance. RDNA2 is also stated to be 50% more efficient in perf/watt compared to RDNA1.

There's a thing called proportions, in terms of shaders relative to the rest of the CPU/GPU/RAM, and a thing called diminishing returns; when you just scale TFLOPs without the rest of the hardware, it's not so simple. For instance, in real-world performance the gap between PS4 and XBO was 50-100% in both exclusives (Uncharted, Horizon) and multiplats (Tomb Raider: Definitive Edition) despite the ~40% higher TFLOP count, whereas X1X was only 20-25% faster than the Pro in the real world despite a 42% higher TFLOP count, because the rest of the hardware/software contributes to the overall performance increase.
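The on-paper gaps referred to here can be checked with commonly cited peak figures (PS4 1.84 TF, XBO 1.31 TF, PS4 Pro 4.2 TF, X1X 6.0 TF); the post's point is that real-world gaps do not track these percentages:

```python
# Commonly cited peak FP32 figures (TFLOPS) for last-gen consoles.
consoles = {"XBO": 1.31, "PS4": 1.84, "Pro": 4.2, "X1X": 6.0}

def gap(a: str, b: str) -> float:
    """Percent paper advantage of console b over console a."""
    return (consoles[b] / consoles[a] - 1) * 100

print(round(gap("XBO", "PS4"), 1))  # 40.5 (% on paper)
print(round(gap("Pro", "X1X"), 1))  # 42.9 (% on paper)
```

Two similar paper gaps, yet very different on-screen results, which is the proportions/diminishing-returns argument in a nutshell.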


#233 Uruz7laevatein
Member since 2009 • 133 Posts

@ronvalencia: CPUs in general spend 90-95% of their cycles accessing L1 and L2 caches to begin with, making this kinda moot; if anything, an asymmetric memory bus isn't really an advantage but a cost-cutting measure. The only things the CPU does are run narrow, branchy, nested algorithms and guide/compile data for the GPU to process.


#234  Edited By Juub1990
Member since 2013 • 9646 Posts

@Uruz7laevatein said:

@BlackShirt20: There's a thing called proportions, and diminishing returns; when you just scale TFLOPs without the rest of the hardware, it's not so simple. For instance, in real-world performance the gap between PS4 and XBO was 50-100% in both exclusives and multiplats (Tomb Raider: Definitive Edition) despite the ~40% higher TFLOP count, because the rest of the hardware/software contributes to the overall performance increase.

That's complete and utter bullshit.


#235  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Uruz7laevatein said:

@BlackShirt20: There's a thing called proportions, and diminishing returns; when you just scale TFLOPs without the rest of the hardware, it's not so simple. For instance, in real-world performance the gap between PS4 and XBO was 50-100% in both exclusives and multiplats (Tomb Raider: Definitive Edition) despite the ~40% higher TFLOP count, because the rest of the hardware/software contributes to the overall performance increase.

RDR2 on X1X vs PS4 Pro: ~8M pixels vs ~4M pixels, while the TFLOPS difference is 42.8 percent.

X1X's 42.8 percent TFLOPS increase was paired with a 45 percent memory bandwidth increase. Both factors influence the render time of each frame.
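The pixel figures cited here work out as follows (assuming the commonly reported RDR2 resolutions: native 3840x2160 on X1X versus a 1920x2160 checkerboard base on PS4 Pro, per Digital Foundry's analysis):

```python
# Rendered pixels per frame under commonly reported RDR2 settings.
x1x_pixels = 3840 * 2160  # native 4K on Xbox One X (~8.3M)
pro_pixels = 1920 * 2160  # checkerboard base on PS4 Pro (~4.1M)

pixel_ratio = x1x_pixels / pro_pixels
tflops_ratio = 6.0 / 4.2  # X1X vs Pro peak compute

print(pixel_ratio)             # 2.0x the rendered pixels
print(round(tflops_ratio, 3))  # 1.429, i.e. ~42.9% more TFLOPS
```

The doubling of rendered pixels from ~43% more compute is possible partly because checkerboard reconstruction fills in the remaining pixels on the Pro, and because memory bandwidth rose alongside compute.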

Avatar image for phbz
phbz

5617

Forum Posts

0

Wiki Points

0

Followers

Reviews: 54

User Lists: 0

#236 phbz
Member since 2009 • 5617 Posts

I'm mesmerized by how confident both these guys feel.

Avatar image for ellos
ellos

2260

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#237 ellos
Member since 2015 • 2260 Posts

@phbz said:

I'm mesmerized by how confident both these guys feel.

If you reversed the situation, the arguments would be the same, just with the roles swapped. That is the beauty of it all.

Avatar image for Uruz7laevatein
Uruz7laevatein

133

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#238  Edited By Uruz7laevatein
Member since 2009 • 133 Posts

@ronvalencia: Only in TFLOPs and memory (albeit diminished due to RPM optimizations), but geometry and ROPs (which were actually worse) didn't scale proportionally, not to mention the Jaguar CPU (and greater software overhead, which really diminishes any incentive of going beyond 4.2 TFLOPs) was a bit of a bottleneck.

Avatar image for Pedro
Pedro

39086

Forum Posts

0

Wiki Points

0

Followers

Reviews: 66

User Lists: 0

#239 Pedro
Member since 2002 • 39086 Posts

@Uruz7laevatein said: For instance, in real-world perf the difference between PS4 and XBO was a 50-100% performance gap (as opposed to the 20-25% real-world difference between X1X and Pro despite X1X's 42% higher TFLOP count) in both exclusive (Uncharted, Horizon) and multi-plat games (Tomb Raider Definitive Edition) despite the 30% higher TFLOP number, because the rest of the hardware/software contributes to the overall performance increase.
It's amazing how wrong you are.

Avatar image for BlackShirt20
BlackShirt20

1642

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#240 BlackShirt20
Member since 2005 • 1642 Posts

@Uruz7laevatein:

Here is what you completely ignore. Tormentos is guilty of this as well.

You look at the PS4 Pro at 4.2TF and the Xbox One X at 6.1TF and think, ok, that's a decent gap. And since we are looking at 10.3TF (best-case scenario) versus 12.2TF, you think, ok... the gap is even smaller.

It's not. These new GPUs are far more efficient this generation. The power difference is about 2 standard PS4s, or even a PS4 Pro, because of the RDNA 2.0 technology.

When MS said the new Xbox would be double the Xbox One X, many PC enthusiasts said that if the GPU is RDNA 2.0, the TF number would be around 10.1, because that is the efficiency gain you get in these new chips over the old architecture. So the Xbox Series X is well over double the Xbox One X. And we are seeing that in the Gears 5 demo alone, after just 2 weeks running on the hardware.

Avatar image for Uruz7laevatein
Uruz7laevatein

133

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#241  Edited By Uruz7laevatein
Member since 2009 • 133 Posts

@Juub1990:

Except it's not.

For instance, in Tomb Raider Definitive Edition, despite both being 1080p, the PS4 was consistently running at double the FPS at higher graphics settings.

In Assassin's Creed Unity the difference was limited to 50% on PS4 over XBO despite the developers trying to provide "parity" (the PS4 being deliberately limited to 900p so as not to make the other look too bad).

In games like FFXV, GTAV, BF4, MGSV, etc., the PS4 generally ran at 1080p with some dips to 900p on full-HD assets, whereas the XBO ran at 900p, but often 720p, on sub-HD assets.

Exclusive games like KZSF, Uncharted, Knack, Infamous, and HZD had effects/algorithms that could only be dreamed of on XBO, on top of the resolution/FPS advantage, due to the synergy of hUMA/HSA on PS4.

This was the difference of the PS4 having 2.2x the external RAM speed, 2x the ROPs, and 4x the ACEs, on top of 30% more TFLOPs, plus lower CPU overhead and a more efficient API/driver (despite XBO's eSRAM and slightly higher CPU/GPU clocks).

The X1X had to be stronger than the Pro (by altering its entire design paradigm) to compensate for the XBO being referred to as the "Xbox720P".

Avatar image for Uruz7laevatein
Uruz7laevatein

133

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#242  Edited By Uruz7laevatein
Member since 2009 • 133 Posts

@BlackShirt20: The problem with calling it a PS4's worth of difference in power is that it only addresses shader numbers, completely out of context. It's like saying the PS4 was 3-5 PS360s ahead of the XBO, or even more given the differences in the layout of the respective APUs. Or that the PS5's Tempest Engine, supposedly several hundred GFLOPs of raw hardware, is effectively equivalent to 2-3 Xbones (or a couple of Zen 2 cores / RDNA 2 CUs) doing HRTF audio rendering in software compute. This time both consoles are using the exact same baseline CPU/GPU arch, with some exclusive ASICs to offload work.

Avatar image for ronvalencia
ronvalencia

29612

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#243 ronvalencia
Member since 2008 • 29612 Posts

@Uruz7laevatein said:

@ronvalencia: Only in TFLOPs and memory (albeit diminished due to RPM optimizations), but geometry and ROPs (which were actually worse) didn't scale proportionally, not to mention the Jaguar CPU (and greater software overhead, which really diminishes any incentive of going beyond 4.2 TFLOPs) was a bit of a bottleneck.

1. RDR2's rendered pixel count scales from PS4 to PS4 Pro with the TFLOPS increase.

2. The X1X GPU has ROPS with a 2MB render cache, which doesn't exist in the baseline Polaris IP.

3. Both the X1X and PS4 Pro GPUs have quad geometry units from quad shader engines. The difference is clock speed: the X1X GPU has a 28% advantage over the PS4 Pro.

Avatar image for Sagemode87
Sagemode87

1455

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#244  Edited By Sagemode87
Member since 2013 • 1455 Posts

@hrt_rulz01: you're not very bright, are you? Do you think the Xbox is running at full spec when playing indie games? Think about your comment. You're trying so hard to create a narrative that you don't even realize your logic holds no water.

Avatar image for BlackShirt20
BlackShirt20

1642

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#245 BlackShirt20
Member since 2005 • 1642 Posts

@Uruz7laevatein: Honest question. Do you actually believe anything you’re saying? Or are you only trying to convince yourself that somehow PS5 can match Xbox Series X in GPU performance?

Avatar image for tormentos
tormentos

30900

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#246  Edited By tormentos
Member since 2003 • 30900 Posts

@Pedro said:
@Uruz7laevatein said: For instance, in real-world perf the difference between PS4 and XBO was a 50-100% performance gap (as opposed to the 20-25% real-world difference between X1X and Pro despite X1X's 42% higher TFLOP count) in both exclusive (Uncharted, Horizon) and multi-plat games (Tomb Raider Definitive Edition) despite the 30% higher TFLOP number, because the rest of the hardware/software contributes to the overall performance increase.
It's amazing how wrong you are.

A game that ran at 900p on Xbox One and 1080p on PS4 had a 44% gap in pixel count; the games that ran at 720p on Xbox One and 1080p on PS4 had a more-than-100% gap.

Effectively, the gap in those scenarios was 44% or more in performance.

Now count all the games that were 900p on Xbox vs 1080p on PS4.

That doesn't include games like Doom.

Getting down to the basics, as with id Tech 5 before it, Doom brings a dynamic pixel-count to the table. The idea is simple - adjust the internal rendering resolution based on the load in order to maximise performance. This helps keep the performance up even in the busiest of scenes - the same areas where a drop in resolution will be harder to pick-up visually. This highlights one area where a PS4 exhibits an advantage over Xbox One: it boasts a full 1080p output for the vast majority of the duration, with minor drops in resolution occurring in select circumstances. In contrast, Xbox One regularly struggles to hit full 1080p, more often coming in around 1472x828 or lower.

When it comes to performance, both versions aim to deliver a steady 60 frames per second update and the game comes remarkably close to delivering just that. On PlayStation 4, the majority of battles play out with only the smallest of drops. We've already presented one of the worst-case scenarios in video form, but the overall experience feels perceptually rock solid to the point where we were surprised to find any drops at all after analysing the footage.

On Xbox One, performance isn't quite as robust but it still manages to feel great. During many of the larger battles, frame-rates tumble into the mid-50s with some dips all the way down into the 40s.

1472x828, that's close to 720p, in fact a spit away from 1280x720, versus 1920x1080.

The game drops into the 40s while also dropping to almost 720p; can you quantify the performance gap delivered in this game? I am sure it is over 100%, keeping in mind that DF states the PS4 stays at 1080p and 60FPS almost always.

I think you people have blocked out the Xbox One's worse performance so hard that you literally forgot how bad it was.

So yeah, he has a point.

900p alone vs 1080p is 44%, and that's without faster frames or higher-quality assets.
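The resolution gaps above are just pixel-count ratios; a quick sketch to verify them (the resolutions are the ones quoted in this thread):

```python
# Pixel-count gap of full 1080p over each lower rendering resolution.
def pixels(w, h):
    return w * h

full_hd = pixels(1920, 1080)   # 2,073,600 pixels

for name, (w, h) in {
    "900p (1600x900)":      (1600, 900),
    "720p (1280x720)":      (1280, 720),
    "Doom XBO (1472x828)":  (1472, 828),
}.items():
    gap = full_hd / pixels(w, h) - 1
    print(f"1080p renders {gap:.0%} more pixels than {name}")
# -> 44% over 900p, 125% over 720p, 70% over 1472x828
```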

@BlackShirt20 said:

@Uruz7laevatein:

Here is what you completely ignore. Tormentos is guilty of this as well.

You look at the PS4 Pro with 4.2 TF and Xbox One X at 6.1TF and think ok, that a decent gap. And since we are looking at 10.3TF (best case scenario), verse 12.2TF and you think ok..... the gap is even smaller.

It’s not. These new GPU’s are far more efficient this generation. The power difference is about 2 standard PS4’s or even a PS4 Pro because of the RDNA 2.0 technology.

When MS said the new Xbox would be double that of Xbox One X, many PC enthusiast said if the GPU is RDNA 2.0 the TF number would be around 10.1 because that is the Efficiency you get in these new chips over the old architecture. So the Xbox Series X is well over double that of the Xbox One X. And we are seeing that in Gears 5 demo alone after just 2 weeks running on the hardware.

STOP.

The PS5 doesn't run lower and only sometimes hit 2.23GHz; the PS5 runs at 2.23GHz and 10.23TF almost always, and depending on the power requirement the GPU drops, but it only drops 10% in power, which translates into a roughly 3% frequency drop.

The problem here is that you are not READING.

That bolded part is total moronic crap.

You know why that gap claim is bullshit? Because both are RDNA2, so the gains apply to BOTH machines, which means the 1.8TF difference between them is minimal; and since the flop counts are higher, the gap, while looking bigger, is actually smaller in relative terms.

Basic math.

1,300 GFLOPs vs 1,840 GFLOPs is a bigger relative gap, even though it is only 540 GFLOPs, than 12.2TF vs 10.26TF, even though that gap is 1,940 GFLOPs, because the higher the totals, the smaller the same absolute difference is in %.

So 540 GFLOPs is about 42% more, while 1.94TF is only about 19%, because the roof is higher now.

1% of 100 is 1 - 10% of 100 is 10.

1% of 1,000 is 10 -10% of 1,000 is 100.
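That relative-gap arithmetic can be checked in two lines (a sketch using the TFLOP figures quoted in this post, not official spec numbers):

```python
# Same-sized absolute gaps shrink in relative terms as the totals grow.
def rel_gap(high, low):
    return (high - low) / low

print(f"{rel_gap(1840, 1300):.0%}")    # PS4 vs XBO GFLOPs -> 42%
print(f"{rel_gap(12.2, 10.26):.0%}")   # XSX vs PS5 TFLOPs -> 19%
```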

To illustrate it better: your math skills are worse than my writing skills.

By the way, RDNA2 is not double the PS4 and Xbox hardware; if that were true, a 6TF RDNA2 GPU would be twice as powerful as the Xbox One X. In fact, the Xbox Series X is over twice as powerful as the Xbox One X.

Avatar image for BlackShirt20
BlackShirt20

1642

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#247 BlackShirt20
Member since 2005 • 1642 Posts

@tormentos: Funny you say that. Because a 4.5TF RDNA 2.0 GPU matches the performance of the current Xbox One X and in some areas can surpass it. Which is about what the Series S is rumored to be.

So yes, the Xbox Series X at a locked 12.2TF has about the equivalent of almost a PS4 Pro over the PS5.

Avatar image for Uruz7laevatein
Uruz7laevatein

133

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#248  Edited By Uruz7laevatein
Member since 2009 • 133 Posts

@tormentos: IIRC the funny thing was that the lower-end 50% difference in multiplats/3rd-party games was due to developers deliberately trying to provide parity between PS4/XBO ("We wanted gamers on Xbox One to feel special or not left out")

Avatar image for BlackShirt20
BlackShirt20

1642

Forum Posts

0

Wiki Points

0

Followers

Reviews: 1

User Lists: 0

#249 BlackShirt20
Member since 2005 • 1642 Posts

@Uruz7laevatein:

@Tormentos:

The salt is strong in you, gentlemen. Go look in the mirror and tell yourself: "The Xbox Series X is stronger, and I must let the butthurt flow through me, or just join the dark side and get on the strength train that is the Xbox Series X!"

Avatar image for tormentos
tormentos

30900

Forum Posts

0

Wiki Points

0

Followers

Reviews: 0

User Lists: 0

#250 tormentos
Member since 2003 • 30900 Posts

@BlackShirt20 said:

@tormentos: “ The xbox series X will be on top of the PS5 and that isn't debatable.”

Did you just admit the new Xbox is a better machine?

Sure I did, I am not a lemming in denial.

@BlackShirt20 said:

@Uruz7laevatein: I am betting it drops well under 10 often, especially with how high the GPU is being clocked.

Regardless, being 20% more powerful with RDNA 2.0 technology is the equivalent of 2 standard PS4s or a PS4 Pro. Let that sink in.

https://www.windowscentral.com/xbox-series-x-way-more-powerful-ps5-heres-how-much-more

Oh god, no wonder you are saying that shit.

It's Windows Central. 😂


Here, educate yourself.

These GPUs are RDNA1, which means they are close to RDNA2, which only has a 15% IPC improvement over RDNA.

The gap between these 2 GPUs is 1.8TF: one is 9.7TF and the other 7.9TF.

The PS4 is 1.8TF, so the gap between these 2 GPUs is a complete PS4, just like you claim here. And look at the difference, if you DARE: ACO, only 6FPS more, lol.

Metro: 6 FPS.

FH4: 15 FPS. This one is meaningless considering both run over 100 FPS.

Hitman: 10 FPS.

PUBG: 13 FPS.

RDR2: 8 FPS.

BF5: 7 FPS.

Control: 7 FPS.

Witcher 3: 9 FPS.

So this ^^ is what a complete PS4 in power amounts to.

You will be more let down waiting for that gap to be huge than we were when we learned the PS5 was 36CU. 🤣