Here's how Microsoft's $500 Xbox One X compares to a PC

#51 scatteh316
Member since 2004 • 10273 Posts

@howmakewood said:

@scatteh316: think the best metric would be to get the avg bandwidth during one frame for both 33ms and 16ms, but I do back up your point that the GPU will most likely never have access to the full bandwidth

You have to work to the worst case scenario though; working to an average figure doesn't leave headroom for when things get hectic, which results in performance problems.
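For a rough sense of scale, a minimal sketch of the per-frame bandwidth budget under discussion, assuming the headline 326 GB/s figure and ignoring any CPU share (both simplifications):

```python
# Per-frame bandwidth budget at the theoretical peak rate, assuming 326 GB/s
# is fully available (it isn't in practice, which is the point being argued).
PEAK_BANDWIDTH_GB_S = 326.0

def gb_per_frame(frame_time_ms: float) -> float:
    """GB of memory traffic available in one frame at the peak rate."""
    return PEAK_BANDWIDTH_GB_S * frame_time_ms / 1000.0

for fps, frame_ms in ((30, 33.3), (60, 16.7)):
    print(f"{fps} fps ({frame_ms} ms/frame): ~{gb_per_frame(frame_ms):.1f} GB per frame")
```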

#52 slimdogmilionar
Member since 2014 • 1343 Posts

@Orchid87: it's not a high step, one card is marketed as 1080p and one is marketed as 1440p. Even still, if the X1X can do 4K, even if it's not in every game, that still puts it in the same ballpark as the 1070. The only GPU that can push 4K30 at decent settings is the 1080, and still some games need SLI 1080s to hit 4K60.

#53 scatteh316
Member since 2004 • 10273 Posts

@slimdogmilionar said:

@scatteh316: Compared to the RX 480 it is. I have a 480 and you can't even hit 1440p without turning down settings, mind you that's a 6-TFLOP card paired with an i5. But you don't understand what he's saying about memory bandwidth; this is why he keeps referencing Nvidia, because the 1070 performs better than the 480 and 580 with the same bandwidth, same with the 1080: it only has a 256-bit bus but can do 4K. Nvidia's memory compression is insane, and for months he's been telling you guys that MS has stolen pages from Nvidia and AMD in order to fix the bandwidth problems AMD GPUs have. Not to mention that the higher the resolution goes, the less the CPU has to do with performance. Look at the 1060 vs 480 for reference: the 1060 is able to match and sometimes beat the 480 even though it only has a 192-bit bus vs the 480's 256-bit.

I think it is you that doesn't understand my argument with his numbers, would you like me to explain?

#54  Edited By Howmakewood
Member since 2015 • 7713 Posts

@scatteh316 said:
@howmakewood said:

@scatteh316: think the best metric would be to get the avg bandwidth during one frame for both 33ms and 16ms, but I do back up your point that the GPU will most likely never have access to the full bandwidth

You have to work to the worst case scenario though; working to an average figure doesn't leave headroom for when things get hectic, which results in performance problems.

Yeah, obviously if the title is very CPU-heavy on top of running 60fps you get different results

#55  Edited By whalefish82
Member since 2013 • 511 Posts

These "can you build a PC comparable with a console for a similar price?" articles are beyond tedious now. I would never recommend skimping on price when building a PC, because it'll only cost you more in the long term with upgrades. A higher initial outlay is mostly negated by free online and cheaper games anyway.

Quite honestly, if you just want to play AAA games and watch stuff, I would say a console is your best bet right now with the release of the Pro and One X. The thing I love about PC the most is the versatility. I write on it, make music, edit photos and videos, on top of having the best gaming experience. You have to take into account that the high cost of a good PC gives you the opportunity to do so much more than just play games and watch movies/TV.

#56  Edited By 22Toothpicks
Member since 2005 • 12546 Posts

It is a pretty sweet deal considering the hardware, but it's still a $500 console. I think MS made a huge mistake by listening to forum dwellers who are overly concerned with resolution and other numbers. MS fails to realize that games and marketing are what sell a console.

Your average consumer doesn't know or care about these technical comparisons. I can see the diehards buying this and that's about it.

Edit: also, Ron's obnoxious posting style should be considered disruptive at this point

#57  Edited By ronvalencia
Member since 2008 • 29612 Posts

@scatteh316 said:
@ronvalencia said:
@scatteh316 said:
@ronvalencia said:

Your argument doesn't reflect real world typical sync rendering workloads. The GPU can't do anything without the CPU completing its task. When the GPU is idle, it's not using memory bandwidth.

Can you please adjust your bandwidth-per-TFLOP figures for Xbox-X so they show a true reflection of the bandwidth available to the GPU once CPU bandwidth allocation has been factored in.

And if you're not able to work that out, can you acknowledge that the GPU in Xbox-X does not have exclusive 100% use of the 326GB/s system bandwidth, that the CPU does use some of it, and that your bandwidth-per-TFLOP figure for Xbox-X is therefore incorrect.

The very nature of the producer (CPU) and consumer (GPU) process model precludes 100 percent GPU usage.

My memory scaling already has GPU wait states (wasted clock cycles) built in. For every ms the CPU consumes, there are fewer effective TFLOPS for the GPU within a given frame time target, i.e. the GPU is waiting on the CPU event. This is the effective TFLOPS vs theoretical TFLOPS subject, i.e. the CPU's ms consumption can reduce the GPU's effective TFLOPS to complete its work within a 16.6 ms or 33.3 ms frame time target. The higher the GPU compute power, the less time to completion.
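A minimal sketch of the effective-TFLOPS idea described above, assuming the GPU simply idles for whatever slice of the frame the CPU consumes (a simplification; real engines overlap CPU and GPU work, and the CPU times below are hypothetical):

```python
PEAK_TFLOPS = 6.0  # X1X headline figure

def effective_tflops(frame_time_ms: float, cpu_time_ms: float, peak: float = PEAK_TFLOPS) -> float:
    """Peak compute scaled by the share of the frame the GPU actually gets to work."""
    gpu_time_ms = max(frame_time_ms - cpu_time_ms, 0.0)
    return peak * gpu_time_ms / frame_time_ms

for frame_ms in (33.3, 16.6):
    for cpu_ms in (4.0, 8.0):  # hypothetical CPU costs per frame
        print(f"{frame_ms} ms frame, {cpu_ms} ms CPU wait -> "
              f"{effective_tflops(frame_ms, cpu_ms):.2f} effective TFLOPS")
```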

Can you please adjust your bandwidth-per-TFLOP figures for Xbox-X so they show a true reflection of the bandwidth available to the GPU once CPU bandwidth allocation has been factored in.

There's a situation where the GPU has full effective memory bandwidth usage, e.g. the CPU exclusively communicating via the fusion links to the GPU and the CPU's data sets being optimized for the 4 MB L2 cache boundary.

Again http://www.eurogamer.net/articles/digitalfoundry-2017-the-scorpio-engine-in-depth

"For 4K assets, textures get larger and render targets get larger as well. This means a couple of things - you need more space, you need more bandwidth," explains Nick Baker. "The question though was how much? We'd hate to build this GPU and then end up having to be memory-starved. All the analysis that Andrew was talking about, we were able to look at the effect of different memory bandwidths, and it quickly led us to needing more than 300GB/s memory bandwidth. In the end we ended up choosing 326GB/s. On Scorpio we are using a 384-bit GDDR5 interface - that is 12 channels. Each channel is 32 bits, and then 6.8GHz on the signalling so you multiply those up and you get the 326GB/s."

Scorpio's GPU has more than 300 GB/s of memory bandwidth allocated to it. Deal with it.

Polaris DCC recovers memory bandwidth lost to memory subsystem inefficiencies.

The AMD CPU only needs to write to main memory when there's a modified L2 cache line eviction. Any cache line snoop request from the GPU can be serviced via the fusion links.

http://www.dualshockers.com/naughty-dog-explains-ps4s-cpu-memory-and-more-in-detail-and-how-they-can-make-them-run-really-fast/

The CPU's access to main memory can be costly, hence the importance of respecting the L2 cache boundary. It's the Cell SPE's on-chip local memory optimization technique revisited!

Again, shared memory PS4 vs Non-shared memory R7-265.

http://www.eurogamer.net/articles/digitalfoundry-2016-we-built-a-pc-with-playstation-neo-gpu-tech

We have a Sapphire R7 265 in hand, running at 925MHz - down-clock that to 900MHz and we have a lock with PS4's 1.84 teraflops of compute power.

In our Face-Offs, we like to get as close a lock as we can between PC quality presets and their console equivalents in order to find the quality sweet spots chosen by the developers. Initially using Star Wars Battlefront, The Witcher 3 and Street Fighter 5 as comparison points with as close to locked settings as we could muster, we were happy with the performance of our 'target PS4' system. The Witcher 3 sustains 1080p30, Battlefront hits 900p60, SF5 runs at 1080p60 with just a hint of slowdown on the replays - just like PS4. We have a ballpark match

...

at straight 1080p on the R7 265-powered PS4 surrogate. Medium settings is a direct match for the PS4 version here and not surprisingly, our base-level PS4 hardware runs it very closely to the console we're seeking to mimic

...

Take the Witcher 3, for example. Our Novigrad City test run hits a 33.3fps average on the R7 265 PS4 target hardware - pretty much in line with console performance.

For Witcher 3, SW battlefront and SF5, there's very minimal difference between shared memory PS4 vs non-shared memory R7-265.

--------------------------------

The argument for a split 256-bit + 128-bit memory setup vs a unified 384-bit memory setup is similar to the classic argument of 256 pixel shaders + 128 vertex shaders vs 384 unified shaders.

The RX-580 can't go beyond its 256-bit bus limit while the X1X can. Strike one against the RX-580.

http://www.eurogamer.net/articles/digitalfoundry-2017-project-scorpio-tech-revealed

We quadrupled the GPU L2 cache size, again for targeting the 4K performance."

X1X GPU's 2MB L2 cache was used to reach 4K performance.

Existing AMD GPUs' pixel engines are not even connected to the L2 cache. Strike two against the RX-580.

The R9-390X's L2 cache is only 1 MB in size and its pixel engines are likewise not connected to the L2 cache. Double strike against the R9-390X.

My old R9-390X couldn't match Scorpio's Forza M6 Nurburgring wet track result, which is why I ditched it.

http://www.eurogamer.net/articles/digitalfoundry-2017-the-scorpio-engine-in-depth

Adding to the list of enhancements, Microsoft increased performance in CPU/GPU coherency and enhanced and improved the speed of the GPU command processor to offload a lot of work from the CPU too, specifically with DirectX 12 engines. However, looking at the layout of the Scorpio Engine, the proportion of the space occupied by the GPU dwarfs the CPU area. The ratio is much, much larger than both Xbox One and PlayStation 4.

Moving CPU rendering logic to GPU's custom command processor (microcode capable) reduces the context switch overheads and CPU workload.

#58 kuu2
Member since 2005 • 12063 Posts

The games so far have proven how powerful the system is at this point.

It runs games at 4K, end of story.

You can go on with your charts, graphs, numbers of other cards, and whatever conjecture you wish to throw in.

The truth is MSoft accomplished what they set out to do and it's because they have some of the smartest engineers on the planet working for them.

Keep dissecting the One X and trying to figure out how they did this with a supposedly weak CPU and $500. At this point I find those still in denial funny.

#59 nygamespotter
Member since 2016 • 523 Posts

That's an awful comparison. And there's no way a 1060 will compete with this. But sure, keep telling yourself 'Master Race'.

#60 deactivated-5c0b07b32bf03
Member since 2014 • 6005 Posts

Y'all just got Ron'd hardcore. Straight up Ron-bomb just dropped on this bitch.

#61 HalcyonScarlet
Member since 2011 • 13668 Posts

@Juub1990 said:
@scatteh316 said:

He really doesn't... he doesn't even understand how bandwidth allocation works in a modern console, hence me calling him out on it, and thus far he's done everything he can to dodge it as he knows I have him by the balls.

I can and have shown him up in a few threads now.

He does but he's a fanboy for Microsoft and AMD and is completely biased when it comes to them. Also he seems to not grasp English 100% so I suspect he doesn't quite understand what you're asking.

Not to mention he skim-reads posts and then goes on, and then you realise he misread the original post.

#62  Edited By ronvalencia
Member since 2008 • 29612 Posts

@scatteh316 said:
@howmakewood said:

@scatteh316: the allocation isn't static

@lglz1337: do share the fun

No it's not, but it has a maximum value which can be calculated; iirc there was a figure of 25GB/s floating around for the OG Xbone's maximum CPU usage when the processor is at maximum capacity.

Xbox-X's CPU is faster still, and even though the processor load is dynamic it still uses bandwidth, which is why Ron's figures are a complete joke and just flat out wrong, but he won't admit this.

The only joke is your argument.

http://www.eurogamer.net/articles/digitalfoundry-2017-the-scorpio-engine-in-depth

"For 4K assets, textures get larger and render targets get larger as well. This means a couple of things - you need more space, you need more bandwidth," explains Nick Baker. "The question though was how much? We'd hate to build this GPU and then end up having to be memory-starved. All the analysis that Andrew was talking about, we were able to look at the effect of different memory bandwidths, and it quickly led us to needing more than 300GB/s memory bandwidth. In the end we ended up choosing 326GB/s. On Scorpio we are using a 384-bit GDDR5 interface - that is 12 channels. Each channel is 32 bits, and then 6.8GHz on the signalling so you multiply those up and you get the 326GB/s."

Scorpio's GPU has more than 300 GB/s of memory bandwidth allocated to it. Polaris DCC recovers memory bandwidth lost to memory subsystem inefficiencies. MS has multiple measures to meet the "more than 300 GB/s memory bandwidth" requirement for the GPU, e.g. memory compression.

If the Kinect/TrueAudio DSP's ESRAM is retained on X1X, that's a reduced main memory hit rate.

MS's hardware customization.

http://www.eurogamer.net/articles/digitalfoundry-vs-the-xbox-one-architects

"On the GPU we added some compressed render target formats like our 6e4 [6 bit mantissa and 4 bits exponent per component] and 7e3 HDR float formats [where the 6e4 formats] that were very, very popular on Xbox 360, which instead of doing a 16-bit float per component 64bpp render target, you can do the equivalent with us using 32 bits - so we did a lot of focus on really maximising efficiency and utilisation of that ESRAM."

The XBO's GPU has additional features over other GCNs, e.g. 64-bit (FP16 per component) render targets compressed into 32 bits.

When XBO's compressed data format customization is combined with 6 TFLOPS of brute force (overcoming the XBO GPU's ALU-bound issues) and Polaris DCC improvements, it would beat the PC's R9-390X and RX-580.

#63 Truth_Hurts_U
Member since 2006 • 9703 Posts

Peasant problems... I'm sitting here with my PC that runs Ultra settings... Laughing about people bickering over low end parts.

#64 scatteh316
Member since 2004 • 10273 Posts

@ronvalencia said:
@scatteh316 said:
@howmakewood said:

@scatteh316: the allocation isn't static

@lglz1337: do share the fun

No it's not, but it has a maximum value which can be calculated; iirc there was a figure of 25GB/s floating around for the OG Xbone's maximum CPU usage when the processor is at maximum capacity.

Xbox-X's CPU is faster still, and even though the processor load is dynamic it still uses bandwidth, which is why Ron's figures are a complete joke and just flat out wrong, but he won't admit this.

The only joke is your argument.

@ronvalencia said:
@scatteh316 said:
@ronvalencia said:
@scatteh316 said:
@ronvalencia said:

Your argument doesn't reflect real world typical sync rendering workloads. The GPU can't do anything without the CPU completing its task. When the GPU is idle, it's not using memory bandwidth.

Can you please adjust your bandwidth-per-TFLOP figures for Xbox-X so they show a true reflection of the bandwidth available to the GPU once CPU bandwidth allocation has been factored in.

And if you're not able to work that out, can you acknowledge that the GPU in Xbox-X does not have exclusive 100% use of the 326GB/s system bandwidth, that the CPU does use some of it, and that your bandwidth-per-TFLOP figure for Xbox-X is therefore incorrect.

The very nature of the producer (CPU) and consumer (GPU) process model precludes 100 percent GPU usage.

My memory scaling already has GPU wait states (wasted clock cycles) built in. For every ms the CPU consumes, there are fewer effective TFLOPS for the GPU within a given frame time target, i.e. the GPU is waiting on the CPU event. This is the effective TFLOPS vs theoretical TFLOPS subject, i.e. the CPU's ms consumption can reduce the GPU's effective TFLOPS to complete its work within a 16.6 ms or 33.3 ms frame time target. The higher the GPU compute power, the less time to completion.

Can you please adjust your bandwidth-per-TFLOP figures for Xbox-X so they show a true reflection of the bandwidth available to the GPU once CPU bandwidth allocation has been factored in.

There's a situation where the GPU has full effective memory bandwidth usage, e.g. the CPU exclusively communicating via the fusion links to the GPU and the CPU's data sets being optimized for the 4 MB L2 cache boundary.

Again http://www.eurogamer.net/articles/digitalfoundry-2017-the-scorpio-engine-in-depth

Can you please adjust your bandwidth-per-TFLOP figures for Xbox-X so they show a true reflection of the bandwidth available to the GPU once CPU bandwidth allocation has been factored in.

Other people have acknowledged the shortcomings in your data; try harder!

#65 slimdogmilionar
Member since 2014 • 1343 Posts

@scatteh316: please do explain, because I'm having a hard time understanding what you're trying to prove exactly. Is it that the 1X doesn't have more bandwidth than the PS4 Pro, or do you believe that after CPU bandwidth the X doesn't have enough left for native 4K?

#66  Edited By ronvalencia
Member since 2008 • 29612 Posts

@scatteh316 said:
@ronvalencia said:
@scatteh316 said:
@howmakewood said:

@scatteh316: the allocation isn't static

@lglz1337: do share the fun

No it's not, but it has a maximum value which can be calculated; iirc there was a figure of 25GB/s floating around for the OG Xbone's maximum CPU usage when the processor is at maximum capacity.

Xbox-X's CPU is faster still, and even though the processor load is dynamic it still uses bandwidth, which is why Ron's figures are a complete joke and just flat out wrong, but he won't admit this.

The only joke is your argument.

@ronvalencia said:
@scatteh316 said:
@ronvalencia said:

The very nature of the producer (CPU) and consumer (GPU) process model precludes 100 percent GPU usage.

My memory scaling already has GPU wait states (wasted clock cycles) built in. For every ms the CPU consumes, there are fewer effective TFLOPS for the GPU within a given frame time target, i.e. the GPU is waiting on the CPU event. This is the effective TFLOPS vs theoretical TFLOPS subject, i.e. the CPU's ms consumption can reduce the GPU's effective TFLOPS to complete its work within a 16.6 ms or 33.3 ms frame time target. The higher the GPU compute power, the less time to completion.

Can you please adjust your bandwidth-per-TFLOP figures for Xbox-X so they show a true reflection of the bandwidth available to the GPU once CPU bandwidth allocation has been factored in.

There's a situation where the GPU has full effective memory bandwidth usage, e.g. the CPU exclusively communicating via the fusion links to the GPU and the CPU's data sets being optimized for the 4 MB L2 cache boundary.

Again http://www.eurogamer.net/articles/digitalfoundry-2017-the-scorpio-engine-in-depth

Can you please adjust your bandwidth-per-TFLOP figures for Xbox-X so they show a true reflection of the bandwidth available to the GPU once CPU bandwidth allocation has been factored in.

Other people have acknowledged the shortcomings in your data; try harder!

Xbox One X beating GTX 980 Ti Super JetStream and Titan X Maxwell OC in ARK Survival Evolved.

http://www.tweaktown.com/news/57990/ark-dev-xbox-one-horsepower-phenomenal/index.html

Studio Wildcard's Jesse Rapczak was recently on the E3 2017 Live Show with Geoff Keighley, where he said that they're aiming for 60FPS on Xbox One X, and that 60FPS was "not possible" on the PS4 Pro. PS4 Pro users can run 60FPS, but only at 720p where we'll see ARK running at 1080p 60FPS on close to Epic settings on Xbox One X.

During the interview, Rapczak said: "It's amazing. It's basically like Epic settings on PC, we're targeting 60FPS. We recently introduced variable frame rate, but on Xbox One X with the extra horsepower, you'll notice less variation in your frame rate even in busy situations".

Rapczak added: "The PS4 Pro also has improved frame rate when compared to PS4. Before the XB1X, it was the best console version of ARK, but it's hard to argue with that 6 teraflops [laughs]. Having the same frame rate in our game isn't really possible, there are a couple reasons for that: the XB1X not only has a faster CPU than PS4 Pro, it also has a lot more RAM. I'd say there's a 50% performance difference".

When it comes to the Xbox One X, his comments build hype: "The Xbox One X is great. Everything from the devkits is so much easier to develop for. It's faster in all sorts of ways, especially for us developers it's easier to iterate. In terms of horsepower, it's just phenomenal, it is expensive but when you think about the price for a similarly specced PC, it doesn't seem unreasonable at all to me. Overall, the quality is going up a lot. It would have been easy to say, hey, let's just take the Xbox One version and crank up the resolution, but actually we've brought over some enhanced effects from PC. We've actually got volumetric lighting and clouds on Xbox One X, higher draw distance that is equal to PC's Epic settings, higher resolution textures...".

X1X has beaten reference GTX 980 Ti

http://www.pcgameshardware.de/ARK-Survival-Evolved-Spiel-55571/Specials/Benchmark-Test-2016-1182869/

X1X has beaten reference GTX 980 Ti Super JetStream and Titan X Maxwell OC

Deal with it.

#67 Syferonik
Member since 2006 • 3060 Posts

"Compared to a desktop CPU, the Xbox One X's CPU will be a substantial bottleneck. The eight Jaguar CPU cores will offer one-half to one-third the performance of AMD's new Ryzen CPUs. As we wrote when the Scorpio specs were first revealed, all eight cores working together will maybe roughly equal the performance of an Intel i3 CPU."


Pathetic

#68 ronvalencia
Member since 2008 • 29612 Posts

@Syferonik said:

"Compared to a desktop CPU, the Xbox One X's CPU will be a substantial bottleneck. The eight Jaguar CPU cores will offer one-half to one-third the performance of AMD's new Ryzen CPUs. As we wrote when the Scorpio specs were first revealed, all eight cores working together will maybe roughly equal the performance of an Intel i3 CPU."

Pathetic

The X1X's CPU is not a large bottleneck for the open-world Unreal Engine 4-based ARK: Survival Evolved.

#69 scatteh316
Member since 2004 • 10273 Posts

@ronvalencia said:
@scatteh316 said:
@ronvalencia said:
@scatteh316 said:
@howmakewood said:

@scatteh316: the allocation isn't static

@lglz1337: do share the fun

No it's not, but it has a maximum value which can be calculated; iirc there was a figure of 25GB/s floating around for the OG Xbone's maximum CPU usage when the processor is at maximum capacity.

Xbox-X's CPU is faster still, and even though the processor load is dynamic it still uses bandwidth, which is why Ron's figures are a complete joke and just flat out wrong, but he won't admit this.

The only joke is your argument.

@ronvalencia said:
@scatteh316 said:
@ronvalencia said:

The very nature of the producer (CPU) and consumer (GPU) process model precludes 100 percent GPU usage.

My memory scaling already has GPU wait states (wasted clock cycles) built in. For every ms the CPU consumes, there are fewer effective TFLOPS for the GPU within a given frame time target, i.e. the GPU is waiting on the CPU event. This is the effective TFLOPS vs theoretical TFLOPS subject, i.e. the CPU's ms consumption can reduce the GPU's effective TFLOPS to complete its work within a 16.6 ms or 33.3 ms frame time target. The higher the GPU compute power, the less time to completion.

Can you please adjust your bandwidth-per-TFLOP figures for Xbox-X so they show a true reflection of the bandwidth available to the GPU once CPU bandwidth allocation has been factored in.

There's a situation where the GPU has full effective memory bandwidth usage, e.g. the CPU exclusively communicating via the fusion links to the GPU and the CPU's data sets being optimized for the 4 MB L2 cache boundary.

Again http://www.eurogamer.net/articles/digitalfoundry-2017-the-scorpio-engine-in-depth

Can you please adjust your bandwidth-per-TFLOP figures for Xbox-X so they show a true reflection of the bandwidth available to the GPU once CPU bandwidth allocation has been factored in.

Other people have acknowledged the shortcomings in your data; try harder!

Xbox One X beating GTX 980 Ti Super JetStream and Titan X Maxwell OC in ARK Survival Evolved.

http://www.tweaktown.com/news/57990/ark-dev-xbox-one-horsepower-phenomenal/index.html

Studio Wildcard's Jesse Rapczak was recently on the E3 2017 Live Show with Geoff Keighley, where he said that they're aiming for 60FPS on Xbox One X, and that 60FPS was "not possible" on the PS4 Pro. PS4 Pro users can run 60FPS, but only at 720p where we'll see ARK running at 1080p 60FPS on close to Epic settings on Xbox One X.

During the interview, Rapczak said: "It's amazing. It's basically like Epic settings on PC, we're targeting 60FPS. We recently introduced variable frame rate, but on Xbox One X with the extra horsepower, you'll notice less variation in your frame rate even in busy situations".

Rapczak added: "The PS4 Pro also has improved frame rate when compared to PS4. Before the XB1X, it was the best console version of ARK, but it's hard to argue with that 6 teraflops [laughs]. Having the same frame rate in our game isn't really possible, there are a couple reasons for that: the XB1X not only has a faster CPU than PS4 Pro, it also has a lot more RAM. I'd say there's a 50% performance difference".

When it comes to the Xbox One X, his comments build hype: "The Xbox One X is great. Everything from the devkits is so much easier to develop for. It's faster in all sorts of ways, especially for us developers it's easier to iterate. In terms of horsepower, it's just phenomenal, it is expensive but when you think about the price for a similarly specced PC, it doesn't seem unreasonable at all to me. Overall, the quality is going up a lot. It would have been easy to say, hey, let's just take the Xbox One version and crank up the resolution, but actually we've brought over some enhanced effects from PC. We've actually got volumetric lighting and clouds on Xbox One X, higher draw distance that is equal to PC's Epic settings, higher resolution textures...".

X1X has beaten reference GTX 980 Ti

http://www.pcgameshardware.de/ARK-Survival-Evolved-Spiel-55571/Specials/Benchmark-Test-2016-1182869/

X1X has beaten reference GTX 980 Ti Super JetStream and Titan X Maxwell OC

Deal with it.

Can you please adjust your bandwidth-per-TFLOP figures for Xbox-X so they show a true reflection of the bandwidth available to the GPU once CPU bandwidth allocation has been factored in.

Instead of changing the subject because you know your figures are wrong.

#70 Syferonik
Member since 2006 • 3060 Posts

@ronvalencia said:
@Syferonik said:

"Compared to a desktop CPU, the Xbox One X's CPU will be a substantial bottleneck. The eight Jaguar CPU cores will offer one-half to one-third the performance of AMD's new Ryzen CPUs. As we wrote when the Scorpio specs were first revealed, all eight cores working together will maybe roughly equal the performance of an Intel i3 CPU."

Pathetic

The X1X's CPU is not a large bottleneck for the open-world Unreal Engine 4-based ARK: Survival Evolved.

Who cares about that garbage dev? They can't program for shit. Their game is a mess.

If Rockstar were to praise the system because RDR2 ran impressively well, then it would be valid. Not some random-ass dev that delivers more bugs than anything else.

#71 scatteh316
Member since 2004 • 10273 Posts

@Syferonik said:
@ronvalencia said:
@Syferonik said:

"Compared to a desktop CPU, the Xbox One X's CPU will be a substantial bottleneck. The eight Jaguar CPU cores will offer one-half to one-third the performance of AMD's new Ryzen CPUs. As we wrote when the Scorpio specs were first revealed, all eight cores working together will maybe roughly equal the performance of an Intel i3 CPU."

Pathetic

The X1X's CPU is not a large bottleneck for the open-world Unreal Engine 4-based ARK: Survival Evolved.

Who cares about that garbage dev? They can't program for shit. Their game is a mess.

If Rockstar were to praise the system because RDR2 ran impressively well, then it would be valid. Not some random-ass dev that delivers more bugs than anything else.

It's not that; he just wants to have babies with Scorpio, which is why he defends it like he does.

He's been like this since it was announced.

#73 drummerdave9099
Member since 2010 • 4606 Posts

@metalslimenite said:

For the price, you can't beat what XB1X has to offer, tech and quality wise.

I think they missed the price point. I know the tech can really determine the price and cost of manufacturing and all that, but if it was going to be $500 I wanted them to provide a huge list of games getting improvements and a handful of new announcements of upcoming 1st party/exclusive games. We didn't really get either. Ori 2 will be a day one buy though.

$400 or less would have made me consider it.

#74 ronvalencia
Member since 2008 • 29612 Posts

@Syferonik said:
@ronvalencia said:
@Syferonik said:

"Compared to a desktop CPU, the Xbox One X's CPU will be a substantial bottleneck. The eight Jaguar CPU cores will offer one-half to one-third the performance of AMD's new Ryzen CPUs. As we wrote when the Scorpio specs were first revealed, all eight cores working together will maybe roughly equal the performance of an Intel i3 CPU."

Pathetic

The X1X's CPU is not a large bottleneck for the open-world Unreal Engine 4-based ARK: Survival Evolved.

Who cares about that garbage dev? They can't program for shit. Their game is a mess.

If Rockstar were to praise the system because RDR2 ran impressively well, then it would be valid. Not some random-ass dev that delivers more bugs than anything else.

Too bad for you, X1X has higher fps than PS4 Pro.

#75 Syferonik
Member since 2006 • 3060 Posts

@ronvalencia said:
@Syferonik said:
@ronvalencia said:
@Syferonik said:

"Compared to a desktop CPU, the Xbox One X's CPU will be a substantial bottleneck. The eight Jaguar CPU cores will offer one-half to one-third the performance of AMD's new Ryzen CPUs. As we wrote when the Scorpio specs were first revealed, all eight cores working together will maybe roughly equal the performance of an Intel i3 CPU."

Pathetic

The X1X's CPU is not a large bottleneck for the open-world Unreal Engine 4-based ARK: Survival Evolved.

Who cares about that garbage dev? They can't program for shit. Their game is a mess.

If Rockstar were to praise the system because RDR2 ran impressively well, then it would be valid. Not some random-ass dev that delivers more bugs than anything else.

Too bad for you, X1X has higher fps than PS4 Pro.

okaaaaaay so? I don't own a pro. And the X is $500.

#76 oflow
Member since 2003 • 5185 Posts

@howmakewood said:

@tdkmillsy: strictly performance-wise these budget PCs are nothing but trash; if you actually want performance you'd better lay down the dough, but at least they have an SSD in the budget build... otherwise it's looking to maybe do 60fps on most (not all) new games at 1080p, because the GPU is trash at anything higher

Yep, that's why I hate when hermits post these potato rigs as a counter to buying a console when they themselves own a $2000+ rig. Be honest and stop trying to sell shit to other people who actually are trying to get into PC gaming. A decent gaming rig is gonna cost you in the $1500 range for one with decent components and bells and whistles.

They're setting people up for a bad time with this. They also never count the cost of the OS or a decent mouse and keyboard or decent sound output. That's another $200+ on the price. Telling someone to buy one of those shit $15 keyboards and a $15 mouse is being an ass.

PC gaming is the best because you get what you pay for.

#77  Edited By Peanutbutterz
Member since 2011 • 219 Posts

@howmakewood: For most people already with desktops, all they would need to do to outclass consoles is upgrade their gpu and be good. That is < $500. And if you do a full build, it's not like you have to build a pc from scratch every 3-4 years. I've been using the same PC for the past 4 years. I upgrade parts here and there, and been doing fine.

#78 scatteh316
Member since 2004 • 10273 Posts

@ronvalencia said:
@scatteh316 said:
@Syferonik said:
@ronvalencia said:
@Syferonik said:

"Compared to a desktop CPU, the Xbox One X's CPU will be a substantial bottleneck. The eight Jaguar CPU cores will offer one-half to one-third the performance of AMD's new Ryzen CPUs. As we wrote when the Scorpio specs were first revealed, all eight cores working together will maybe roughly equal the performance of an Intel i3 CPU."

Pathetic

The X1X's CPU is not a large bottleneck for the open-world Unreal Engine 4-based ARK: Survival Evolved.

Who cares about that garbage dev? They can't program for shit. Their game is a mess.

If Rockstar were to praise the system because RDR2 ran impressively well, then it would be valid. Not some random-ass dev that delivers more bugs than anything else.

It's not that; he just wants to have babies with Scorpio, which is why he defends it like he does.

He's been like this since it was announced.

You started a personality war.

Yay for pictures....

Now can you please adjust your bandwidth-per-TFLOP figures for Xbox-X so they show a true reflection of the bandwidth available to the GPU once CPU bandwidth allocation has been factored in.

#79 appariti0n
Member since 2009 • 5013 Posts

@oflow: Mostly agree, but I'd consider a $1500 tower more than decent, yeah?

Though if you're going to make your first PC purchase, sinking some extra dough in, especially for a better case and power supply, is never a bad investment.

#80 gamecubepad
Member since 2003 • 7214 Posts

I did a comparable build to Scorpio.

That is: Win10, mini-ITX case, 600W bronze modular PSU, i3/i5 with a cheap mobo, 8GB DDR4-2133 (upgrade to 16GB later), 1TB HDD, UHD BD drive, Xbox One gamepad (Bluetooth) and a Bluetooth receiver.

That's $600 for i3, $670 for i5...without a GPU. Must have at least GTX 1060/RX 580 with excellent OC. That's $850-920.

You'd need a GTX 1070 or upcoming Vega to "beat" the Scorpio.

#81 mariokart64fan
Member since 2003 • 20828 Posts

@scatteh316: first, who cares how the Xbox One X compares to a PC? It can't be compared, because PCs do more than just play games; they have a lot going on.

The Xbox One X doesn't have a lot going on, so it doesn't need a ridiculously large amount of RAM.

Etc. Just make new games, but what does Microsoft do? Oh yeah, no exclusives, nothing.

#82 Gatygun
Member since 2010 • 2709 Posts

The ARK dev also said it's a variable frame rate, which can basically mean it hits 60fps while looking at a rock on the ground and 10fps while looking forward. The devs are notorious for being horrible at optimisation.

I believe their original game on the Xbox One runs at an average of 18fps; that's how bad it is.

Also about PC building.

If you already have a PC with good parts, the only things you really need to upgrade are the motherboard + memory + GPU + CPU. That's what I mostly do.

When I do upgrade my power supply I make sure I can keep it for 10 years, so if the PC I build at that moment needs 600W, I will buy a 1000W PSU. Sure it costs me a bit more money, but it means I don't have to upgrade. The last PSU I bought was in 2008 and it was a 1000W unit; to this day it still runs all my hardware perfectly fine.

A motherboard can be bought for about 150 bucks (a good one), memory is about another 150 bucks, a GPU is about 300-400 bucks, and a CPU is about 200-300 bucks, which will push you into the high-end gaming segment. If you want budget you can do it this way.

If, however, you need to build a PC from scratch, a 500 dollar machine is going to be utter crap.

Also about beating the scorpio.

Who wants to play games on the PC front at 4K? You are better off playing at 1080p and getting more bells and whistles going. Unless you sit straight in front of a massive screen, 1440p would still be a better option. There is so much better stuff to get than 4K.

4K has been nothing but a useless buzzword these days.

#83 tormentos
Member since 2003 • 33784 Posts

@ronvalencia said:

The article from PCgamer didn't factor in RX-580's memory bottleneck situation. RX-580 is not the first Polaris 10* with 6 TFLOPS i.e. MSI RX-480 GX has 6.06 TFLOPS.

Read http://gamingbolt.com/ps4-pro-bandwidth-is-potential-bottleneck-for-4k-but-a-thought-through-tradeoff-little-nightmares-dev

PS4 Pro's has 4.2 TFLOPS with 218 GB/s hence the ratio is 51.9 GB/s per 1 TFLOPS and it's already memory bottlenecked.

RX-580 has 6.17 TFLOPS with 256 GB/s hence the ratio is 41.49 GB/s per 1 TFLOPS and it's memory bottleneck is worst than PS4 Pro.

X1X has 6 TFLOPS with 326 GB/s hence the ratio is 54.3 GB/s per 1 TFLOPS.
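For reference, the GB/s-per-TFLOP ratios being quoted here fall straight out of the stated specs; a minimal sketch:

```python
# Bandwidth-per-TFLOP ratios from the spec figures quoted above.
specs = {
    "PS4 Pro": (218.0, 4.2),   # GB/s, TFLOPS
    "RX 580":  (256.0, 6.17),
    "X1X":     (326.0, 6.0),
}
for name, (bw_gb_s, tflops) in specs.items():
    print(f"{name}: {bw_gb_s / tflops:.1f} GB/s per TFLOP")
```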

The RX-480 (5.83 TFLOPS) has 264 GB/s effective bandwidth, hence it's very close to the R9-290 (4.8 TFLOPS) and R9-290X (5.6 TFLOPS) results.

I welcome PCgamer's Tuan Nguyen to debate me.

PS; R9-390X's effective memory bandwidth is about 315.6 GB/s and has no DCC (Delta Color Compression) features.

Check list

1. I have game programmer's point of view on memory bottlenecks with PS4 Pro.

2. I have both AMD and NVIDIA PC benchmarks showing memory bottlenecks for Polaris 10 XT. I have cached additional graphs showing similar patterns, e.g. F1 2016, Forza Horizon 3, etc. I'm ready for General Order 24/Base Delta Zero.

My god, you don't fu**ing get tired of getting owned?

Ryzen, Vega, FP16, $399 based on SoC size... you just don't get tired of being wrong.

What RX 580 bandwidth bottleneck? Link me to it please and stop inventing shit; link me, because the RX 580 has the same bandwidth as the RX 480 yet performs better.

Why do you bring up the Pro and link the Pro in this thread? You sure don't learn your lesson; you always do this shit and end up owned. You quote a developer with god knows what intentions and claim it to be god's word. You did the same with DF, which you quote when it serves you best, but when they claimed Scorpio would have a CPU bottleneck you freaked out and started to attack it...

Did you take into account that 1TF used for FP16 processing would yield 2TF of performance while using the same exact bandwidth? Did you? Oh no, I am sure you didn't, because you are a blind fanboy that puts pitfalls on the PS4 hardware but overhypes shitty cosmetic crap on Xbox hardware; you have spent almost 4 years doing that shit.

Jit Compression.

Tile Resources.

Data move engines

Audio Block

ESRAM

150mhz faster CPU

DX12

The cloud

Basically you have tried to hype every single piece of crap on Xbox One and it all served for nothing, and you are doing the same here.

The RX 580 has 345GB/s effective bandwidth factoring in DCC, the same as the RX 480, which you love to pretend don't have DCC and somehow have just 256. So it has 345GB/s effective to itself, which is more than the 315GB/s you falsely claim for the R9 390X, yet the R9 390X beats it by a frame in some games at 4K. You know that there are instances where more CUs are actually more beneficial than having fewer CUs at a higher clock?

You don't even fu**ing know for a fact what the CPU reservation on Scorpio is; DF hasn't asked, which I also find odd, but here you are parading around as if Scorpio had all the bandwidth to itself.

You don't have the standing to challenge anyone to a debate; he clearly saw what we all saw: a CPU bottleneck and a shared memory bandwidth that doesn't exist on PC.

And considering that your ass has been handed to you by me this year more times than I can remember, you simply and blindly assume crap based on nothing but your biased opinion; for you MS and AMD are god's gift to earth.

You are wrong and you omit things on purpose when they don't serve you best; I told you Scorpio would be $500 but you did not listen.

@ronvalencia said:

From https://www.techpowerup.com/forums/threads/article-just-how-important-is-gpu-memory-bandwidth.209053/

GTX 970's (FarCry 4 @ 1440p Very High) memory bandwidth usage. GTX 970 has 224 GB/s physical memory bandwidth.

Note why Microsoft aimed for more than 300 GB/s for 4K.

PS4 Pro's 218 GB/s is fine for 1080p and 1440p.

3x Owned now noob.

Please stop. The PS4 Pro has 4K native games with 218GB/s; the effective bandwidth of the PS4 Pro is 218GB/s + 35% from DCC, which = 294GB/s, and that is without taking into account that the PS4 Pro's FP16 feature can push 2x the performance over the same bandwidth.
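The DCC arithmetic asserted there is just the physical figure scaled by the claimed saving; a minimal sketch, assuming the "up to 35%" saving applies in full (a best case that depends on the content being rendered):

```python
def effective_bandwidth(physical_gb_s: float, dcc_saving: float = 0.35) -> float:
    """Physical bandwidth scaled up by the claimed delta color compression saving."""
    return physical_gb_s * (1.0 + dcc_saving)

print(effective_bandwidth(218.0))  # ~294 GB/s claimed for PS4 Pro
print(effective_bandwidth(256.0))  # ~346 GB/s claimed for RX 480/580
```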

So needing 300GB/s is a joke, and again you are assuming what MS claims without actual clarification. They say more than 300GB/s, but they were talking about the complete system, not just the GPU. It is impossible that MS gives just 26GB/s to the CPU when the XBO gives 30GB/s at 1.7GHz, but since you are a blind fanboy you don't look into that.

Scorpio has 326GB/s SHARED and you don't know how much is reserved for the system and CPU; it certainly isn't zero like you once tried to claim, lemming. The RX 580 has 256GB/s for the GPU + more than 50GB/s from DDR4.

256GB/s + 50GB/s. Hell, look at freaking Skylake: 60+GB/s reads and almost 60GB/s writes as well. Why in hell do you think Ryzen benefits so much from DDR4 over DDR3?

So you want to argue that shitty Jaguar has Ryzen-like performance but somehow without using bandwidth in the same way Ryzen uses it?

I predict a minimum of 30GB/s like the XBO, but it could be more.

@scatteh316 said:

Can you please adjust your bandwidth-per-TFLOP figures for Xbox-X so they show a true reflection of the bandwidth available to the GPU once CPU bandwidth allocation has been factored in.

Oh, he will simply play blind. I have spent more than a year arguing with him about his totally bullshit bandwidth claims for Scorpio; he even tried to argue that the CPU would use zero bandwidth... hahahaha.

Because Sony used a chart showing from zero to more than 30GB/s, he somehow thinks it would use zero bandwidth.

The guy is completely delusional and sold on MS PR bullshit. In fact he even invents crap based on nothing but a misquote: he assumed Scorpio had a Ryzen CPU, a Vega GPU and FP16 only because Phil Spencer claimed they wanted to do something different from the Pro, which ended up being a lie, since they chose Polaris like the Pro and Jaguar like the Pro, both on a 16nm process; they just use more RAM. But it is clear that Rondementia loves to assume crap, and what he doesn't assume he invents, like he did with the price of Scorpio, which I claimed would be $500 along with many others, while he claimed the SoCs for both had only a 12% difference in size so it would be $400. He downplayed the 4K Blu-ray, extra RAM, better cooling and extra CUs as well; for him everything on Scorpio was as expensive as on the Pro... lol

#84 tormentos
Member since 2003 • 33784 Posts

@slimdogmilionar said:

@Orchid87: it's not a high step, one card is marketed as 1080p and one is marketed as 1440p. Even still, if the X1X can do 4K, even if it's not in every game, that still puts it in the same ballpark as the 1070. The only GPU that can push 4K30 at decent settings is the 1080, and still some games need SLI 1080s to hit 4K60.

I don't have a problem admitting Scorpio will have more games at 4K; how much better the settings are remains to be seen, since reaching for 4K while the PS4 Pro is at 1800p will probably eat most of Scorpio's extra power. It has only 43% more power, while from 1800p to 4K the difference is 44% more pixels. Scorpio will have higher quality textures because of having more RAM, and probably a little better AA too because of bandwidth, but from there to claiming Scorpio is head to head vs a GTX 1070 is a joke; it is not, it is based on Polaris, which doesn't kick that high at 6TF regardless of bandwidth.

#85 GetsLaidAlot
Member since 2017 • 121 Posts

I'm not a technical wizard, can someone tell me if the x1x is actually worth the $500 in specs alone?

#86 MonsieurX
Member since 2008 • 39858 Posts

@getslaidalot said:

I'm not a technical wizard, can someone tell me if the x1x is actually worth the $500 in specs alone?

can't compare

#87 Howmakewood
Member since 2015 • 7713 Posts

@Peanutbutterz said:

@howmakewood: For most people already with desktops, all they would need to do to outclass consoles is upgrade their gpu and be good. That is < $500. And if you do a full build, it's not like you have to build a pc from scratch every 3-4 years. I've been using the same PC for the past 4 years. I upgrade parts here and there, and been doing fine.

I obviously know that, but it was from a completely fresh viewpoint. If you check my sig I'm still rocking a 2600K i7 (older than the OG Xbone/PS4) on my main gaming PC because I haven't bothered to swap it out (though I'm planning to swap it, maybe for the upcoming 7800X). I even have a 4790K in my living room's mini-ITX case.

#88 DrLostRib
Member since 2017 • 5931 Posts

why are we having a comparison for a console whose performance is still mostly unknown?

#89 Howmakewood
Member since 2015 • 7713 Posts

@oflow: Yeh I'm totally with you on that

#90 gamecubepad
Member since 2003 • 7214 Posts

@getslaidalot said:

I'm not a technical wizard, can someone tell me if the x1x is actually worth the $500 in specs alone?

The answer is an overwhelming... YES. The specs are worth more than $500. You'd be spending $800-900 for a comparable PC build and it still wouldn't be in a tidy little package like Scorpio.

#91 slimdogmilionar
Member since 2014 • 1343 Posts

@tormentos: Bro, you do realize that the 1070 is not gonna give you 4K in all games? In order to reach 4K with a 1070, settings have to be turned down.

1800p will stress the Pro more than the X1X, not to mention the X is better than the Pro in every way: better CPU, better GPU, and more RAM, just to name the obvious. The 1070 is not considered a true 4K card; it will hit 4K for some games, and just like a 1070 owner can choose whether they want 4K with custom settings vs 1440p @ 144Hz, game devs have the same options with the X.

Anyone who has a Polaris GPU knows that you gain more from overclocking the memory clock; I didn't even up my GPU clock, just raised the memclock to 2150. This has been a widely discussed issue: the card is bandwidth bound, paired with the fact that AMD's memory compression is nowhere near Nvidia's. Seriously, the 480 is 6TF and the 1070 is 6.5TF with the same size bus as the 480; how would you explain that? Why does the 390 perform better than the 480 at higher resolutions? It has a 512-bit bus compared to the 480's 256-bit. The 480 was marketed as a 1080p card, so it really didn't need more than 256-bit (see the sketch below for the raw bus-width arithmetic).
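A minimal sketch of that bus-width arithmetic: theoretical bandwidth is bus width times effective data rate, so the 1060's narrower bus is offset elsewhere (the 8 Gbps per-pin data rates here are the commonly published ones, assumed rather than taken from this post):

```python
def theoretical_bandwidth_gb_s(bus_bits: int, gbps_per_pin: float) -> float:
    """Bus width in bits / 8 gives bytes per transfer; multiply by the per-pin data rate."""
    return bus_bits / 8 * gbps_per_pin

print(theoretical_bandwidth_gb_s(256, 8.0))  # RX 480/580: 256 GB/s
print(theoretical_bandwidth_gb_s(192, 8.0))  # GTX 1060:   192 GB/s
print(theoretical_bandwidth_gb_s(256, 8.0))  # GTX 1070:   256 GB/s, same as the 480/580
```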

Just saying, if I felt the 1070 was ahead of the 1X it would be my next GPU; instead I'm looking towards the 1080, which can do 4K and is cheaper than the X.

#92  Edited By whalefish82
Member since 2013 • 511 Posts

@gamecubepad said:
@getslaidalot said:

I'm not a technical wizard, can someone tell me if the x1x is actually worth the $500 in specs alone?

The answer is an overwhelming... YES. The specs are worth more than $500. You'd be spending $800-900 for a comparable PC build and it still wouldn't be in a tidy little package like Scorpio.

You can do a hell of a lot more with the PC than just playing games and watching stuff. It's certainly a good price for a good quality gaming system though.

#93 gamecubepad
Member since 2003 • 7214 Posts

@whalefish82:

While I won't disagree, that's a software-based argument and the question was "are the specs worth $500?".

A GTX 1060/RX 580 mini-ITX build is like $800-900. Look at the size of this thing...

Smaller than Xbox One S, 4.5x as powerful

#94  Edited By 04dcarraher
Member since 2004 • 23832 Posts

@slimdogmilionar said:

@scatteh316: Compared to the RX 480 it is. I have a 480 and you can't even hit 1440p without turning down settings, mind you that's a 6-TFLOP card paired with an i5. But you don't understand what he's saying about memory bandwidth; this is why he keeps referencing Nvidia, because the 1070 performs better than the 480 and 580 with the same bandwidth, same with the 1080: it only has a 256-bit bus but can do 4K. Nvidia's memory compression is insane, and for months he's been telling you guys that MS has stolen pages from Nvidia and AMD in order to fix the bandwidth problems AMD GPUs have. Not to mention that the higher the resolution goes, the less the CPU has to do with performance. Look at the 1060 vs 480 for reference: the 1060 is able to match and sometimes beat the 480 even though it only has a 192-bit bus vs the 480's 256-bit.

The problem is that there's more to it than just adding memory bandwidth to solve performance once you get to 1440p or higher. Polaris-based AMD GPUs have Delta Color Compression, the same ability as Nvidia's Maxwell or Pascal GPUs, though the compression ratio isn't as advanced as Nvidia's Pascal architecture. Scorpio and the PS4 Pro have the same DCC feature as Polaris, which can save up to 35% of memory bandwidth vs using uncompressed data. Pascal's DCC is about 20% better than Maxwell's, so you're looking at a 40% bandwidth saving without the help of GDDR5X, in which case it would be 70% savings.

Bandwidth and the 256-bit bus are not the whole picture of why the RX 480 or RX 580 can't perform well enough at 1440p or higher with full settings. It's the fact that the GPUs themselves are only able to push 40 GPixel/s (RX 480) and 42.9 GPixel/s (RX 580). A test done on an RX 580 at 1440p, overclocked to 1500MHz from 1290MHz and with its memory at an effective 9000MHz vs 8000MHz (stock), only yielded a 7% increase in performance over the default-clocked RX 580, mind you that the memory overclock pushed bandwidth to 288GB/s.
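Checking the overclock numbers in that paragraph, a sketch using the figures stated above (256-bit bus at an effective 8000 MHz vs 9000 MHz):

```python
def gddr5_bandwidth_gb_s(bus_bits: int, effective_mhz: float) -> float:
    """Bytes per transfer (bus_bits / 8) times transfers per second."""
    return bus_bits / 8 * effective_mhz * 1e6 / 1e9

print(gddr5_bandwidth_gb_s(256, 8000))  # 256.0 GB/s stock RX 580
print(gddr5_bandwidth_gb_s(256, 9000))  # 288.0 GB/s after the memory overclock
```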

The reason the GTX 1060 can perform better than the RX 480 is its 72.3 GPixel/s fill rate, not solely its memory compression; its texture rate is only 120 GTexel/s while the RX 480's is 182 GTexel/s, whereas the GTX 1070 has 96 GPixel/s and 180 GTexel/s. The GTX 1070 can perform up to 40% better than an RX 480 at 4K.
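Where those GPixel/s figures come from: pixel fill rate is ROP count times core clock. The ROP counts and clocks below are the commonly published ones and are my assumption, not taken from the post:

```python
def pixel_fill_rate_gpix_s(rops: int, clock_ghz: float) -> float:
    """Peak pixel fill rate: render output units times core clock."""
    return rops * clock_ghz

print(pixel_fill_rate_gpix_s(32, 1.266))  # RX 480   -> ~40.5 GPixel/s
print(pixel_fill_rate_gpix_s(32, 1.340))  # RX 580   -> ~42.9 GPixel/s
print(pixel_fill_rate_gpix_s(48, 1.506))  # GTX 1060 -> ~72.3 GPixel/s
print(pixel_fill_rate_gpix_s(64, 1.506))  # GTX 1070 -> ~96.4 GPixel/s
```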

Once we know the X1X GPU's performance numbers, we can see where it sits between the RX 480 and GTX 1070.

#95 dino7c
Member since 2005 • 533 Posts

If games are properly optimized for it, it will be pretty impressive

#96 BlackShirt20
Member since 2005 • 2631 Posts

@Orchid87: lol, the build they used in the article admits it won't run games at 1440p or 4K. It does however run games at 1080p 60FPS. Lol, the Xbox One X already does true 4K at ultra settings at a rock-solid 60FPS.

Major fail

#97 slimdogmilionar
Member since 2014 • 1343 Posts

@04dcarraher: Yeah, I agree the GTX-series cards are faster than what AMD has to offer, but I didn't want to get into all that because at the end of the day all those operations will be faster on the 1X by default, because it's already performing better than the 480.

#98  Edited By 04dcarraher
Member since 2004 • 23832 Posts

@slimdogmilionar said:

@04dcarraher: Yeah, I agree the GTX-series cards are faster than what AMD has to offer, but I didn't want to get into all that because at the end of the day all those operations will be faster on the 1X by default, because it's already performing better than the 480.

At least I can explain things without posting a million charts, cryptic words and partial quotes lol

#99 clone01
Member since 2003 • 29826 Posts

@Orchid87 said:

@metalslimenite: but wasn't everyone expecting a $400 price tag?

A lot of people were, I think. I was hoping but I sure didn't hold my breath. On topic, I thought the XB1 X was a pretty slick reveal. I probably won't get one at launch, but maybe down the road.

#100  Edited By slimdogmilionar
Member since 2014 • 1343 Posts

@04dcarraher: lol

Don't lie you know you want that Ron life.