Here's how Microsoft's $500 Xbox One X compares to a PC

#101  Edited By Wasdie  Moderator
Member since 2003 • 53622 Posts

@tdkmillsy: You can't have "mid range PC action" without Steam, GoG, Amazon games, Origin, Nexus Mods, and all of the other stuff that actually makes PC gaming great. Power is just one slice of the pie.

I'm not going to argue that you can't really build a PC as powerful and as fully featured as the Xbox One X for $500, at least for a few months. What I'm saying is it's idiotic to forgo a PC just because you've convinced yourself you're getting the "mid range PC action" from a $500 machine (with a $60-a-year online subscription) that is currently a bit more bang-for-buck, performance-wise, than comparable PCs.

That's just so idiotic. Think about how stupid that is.

#102  Edited By gamecubepad
Member since 2003 • 7214 Posts

@04dcarraher:

In Ron's defense, he has posted the Vega pixel-engine-to-L2-cache direct access and Nvidia Maxwell/Pascal tile caching charts, and talked about the proportion of memory bandwidth to FLOPS, a thousand times.

If there is any secret sauce in the Scorpio, this is it. Pixels and geometry stay in L2 instead of being passed through the memory controller. Otherwise it won't have much over an overclocked RX 480, regardless of memory bandwidth.

#103  Edited By 04dcarraher
Member since 2004 • 23832 Posts

@gamecubepad:

Problem is, the X1X GPU is still largely Polaris based, and all MS really said about it was that they quadrupled the GPU L2 cache size and made other odds-and-ends changes for 4K to give 2.7x the fill rate vs the X1. He is assuming that they tied the L2 cache to the pixel engine like Vega; there is no direct quote or proof that they did. The spec sheet released with the dev kit lists Polaris features on both the console and the dev kit. "Some performance optimizations" from Vega can mean anything from code/driver handling to hardware changes.

Like I said, we need to find out what's actually in it, and its pixel and texture fill rates, to determine where it sits.

#104 gamecubepad
Member since 2003 • 7214 Posts

@04dcarraher:

You are correct in saying he's assuming the memory transfer bottleneck reductions and Vega enhancements MS was talking about referred to direct pixel-engine-to-L2-cache access.

When you boil it down, MS said it's like a GTX 980 (albeit with more VRAM), and that's probably accurate.

#105 GetsLaidAlot
Member since 2017 • 121 Posts

@MonsieurX said:
@getslaidalot said:

I'm not a technical wizard, can someone tell me if the x1x is actually worth the $500 in specs alone?

can't compare

why?

#106 whalefish82
Member since 2013 • 511 Posts

@gamecubepad said:

@whalefish82:

While I won't disagree, that's a software-based argument and the question was "are the specs worth $500?".

A GTX 1060/RX 580 mini-ITX build is like $800-900. Look at the size of this thing...

I agree, which is why I said it's a great-value games/media player system.

#107 MonsieurX
Member since 2008 • 39858 Posts

@getslaidalot said:
@MonsieurX said:
@getslaidalot said:

I'm not a technical wizard, can someone tell me if the x1x is actually worth the $500 in specs alone?

can't compare

why?

closed vs open ecosystem

#108 whalefish82
Member since 2013 • 511 Posts

I still have to question the CPU choice in these consoles. I mean, the i5 I upgraded nearly two years ago was already quite a lot more powerful than the One X's Jaguar chip. Frame rates are really being held back on consoles right now as a result, and AI advancement is stalling all round.

#109 gamecubepad
Member since 2003 • 7214 Posts

@whalefish82:

Ok no biggie. We're on the same page.

#110 Fairmonkey
Member since 2011 • 2312 Posts

@metalslimenite: No one only pays $500 for an Xbox though; console gamers fail to realize they are gonna pay a hell of a lot more. PC wins in value over time, in addition to being the best way to play.

#111  Edited By 04dcarraher
Member since 2004 • 23832 Posts

@whalefish82 said:

I still have to question the CPU choice in these consoles. I mean, the i5 I upgraded nearly two years ago was already quite a lot more powerful than the One X's Jaguar chip. Frame rates are really being held back on consoles right now as a result, and AI advancement is stalling all round.

Their focus on higher-than-1080p resolutions targeting 4K will actually help shift the game's load more toward the GPU's limits. It slows the rate at which the CPU has to send data to the GPU, since it takes the GPU longer to process and render an image at 4K than it would at, say, 1080p. At the same time, it's vital for devs to code for 8 threads if they don't want to see major fps dips and/or 30 fps caps.
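To make that shift concrete, here's a toy frame-time model (illustrative only; the per-pixel and per-frame costs are made-up numbers, not measurements of any real game): CPU work per frame stays roughly constant, while GPU work scales with the pixel count, so at 4K the GPU becomes the limiter and the CPU has more slack per frame.

```python
# Toy frame-time model: CPU cost is roughly per-frame, GPU cost scales with pixels.
# All constants are illustrative assumptions, not measured values.
CPU_MS_PER_FRAME = 10.0          # hypothetical CPU simulation + draw-call cost per frame
GPU_NS_PER_PIXEL = 4.0           # hypothetical GPU shading cost per pixel

def frame_times(width, height):
    gpu_ms = width * height * GPU_NS_PER_PIXEL / 1e6
    frame_ms = max(CPU_MS_PER_FRAME, gpu_ms)   # whichever side is slower sets the frame time
    return CPU_MS_PER_FRAME, gpu_ms, 1000.0 / frame_ms

for name, (w, h) in {"1080p": (1920, 1080), "4K": (3840, 2160)}.items():
    cpu, gpu, fps = frame_times(w, h)
    bound = "GPU" if gpu > cpu else "CPU"
    print(f"{name}: CPU {cpu:.1f} ms, GPU {gpu:.1f} ms -> ~{fps:.0f} fps ({bound}-bound)")
```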

#112 scatteh316
Member since 2004 • 10273 Posts

@mariokart64fan said:

@scatteh316: First, who cares how the Xbox One compares to a PC? It can't be compared, because PCs do more than just play games; they have a lot going on.

The Xbox One X doesn't have a lot going on, so it doesn't need a ridiculously large amount of RAM.

Etc. Just make new games, but what does Microsoft do? Oh ya, no exclusives, nothing.

GTFO of system wars...

#113 Bread_or_Decide
Member since 2007 • 29761 Posts

*system wars mode activate*

It's fitting for xbox to have the most teraflops....since they usually have the most flops to begin with.

THANK YOU GOODNIGHT

#114 PurpleMan5000
Member since 2011 • 10531 Posts

@tdkmillsy said:

I genuinely was wondering if it's possible to build a PC to compete with the XB1X. Ignoring the fact that games are cheaper and multiplayer is free (you get free games on Live, and ease of use makes up for that), it's pretty obvious you can't compete. The PC they are talking about:

has less RAM

has less hard drive space

has less memory bandwidth

has no operating system

has no controllers

misses other features in the XB1X

So if you're looking for mid range PC action and don't want to play mouse+keyboard games, the XB1X is the way to go.

The key difference imo is that the PC will be a lot cheaper to upgrade later than the Xbone X will be to replace. You could make a compelling case either way, though. I just wish Microsoft would go all in on the Xbone X and give it its own exclusives. I feel like it will be held back by every game having to have an Xbone version.

#115 Wasdie  Moderator
Member since 2003 • 53622 Posts

@whalefish82 said:

I still have to question the CPU choice in these consoles. I mean, the i5 I upgraded nearly two years ago was already quite a lot more powerful than the One X's Jaguar chip. Frame rates are really being held back on consoles right now as a result, and AI advancement is stalling all round.

Framerates aren't suffering from a lack of CPU power. Framerates for games at 4K, and even 1080p60, will suffer from the GPU first, then memory bandwidth, and then the CPU. At 4K the memory bandwidth bottleneck is real.

Also, please: "AI advancement is stalling all round" is A) a gross misinterpretation of the state of NPC behavior in games and B) nothing to do with consoles if it is a problem. NPC behavior and decision making are far more dependent on how much time and effort a developer is willing to put into them than anything else. Super fast CPUs don't mean crap. Proper AI development with our current methods (this includes the highest-end games on the PC) is incredibly labor intensive. AI doesn't "think," and thus all of its various behavioral patterns must be defined by people. The more of these behaviors you program, and the better you create the logical flow and state machines, the better the AI behaves. There is also the issue of animations. AI that is believable needs excellent and believable decision making, pathfinding, and animation. If any of those fail, the AI becomes unbelievable and silly.

Level design, and how well your AI's pathfinding and decision making mesh with that level design, is incredibly important. Yet again, another very labor-intensive process.

The only thing a CPU is really going to bottleneck within the context of our video games (board games like chess are a different story) is the volume of NPCs that can be tracked and animated. Considering the consoles have multiple games with literally hundreds of AI dudes on screen, CPU strength relative to NPC processing needs hasn't been a problem since the 7th gen with the PS3/360.
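As a concrete illustration of the "logical flow and state machines" point, here is a minimal hand-authored NPC state machine sketch (the states and transition rules are hypothetical, not from any actual game): every behavior the NPC exhibits has to be explicitly enumerated and wired up by a person, which is why the quality of game AI tracks developer effort more than CPU speed.

```python
# Minimal hand-authored NPC finite state machine (illustrative only).
# Every state and transition below is something a designer had to define explicitly;
# nothing here "thinks" or learns on its own.

def next_state(state, sees_player, low_health, at_cover):
    if state == "patrol":
        return "attack" if sees_player else "patrol"
    if state == "attack":
        if low_health:
            return "take_cover" if not at_cover else "attack"
        return "attack" if sees_player else "search"
    if state == "take_cover":
        return "attack" if at_cover else "take_cover"
    if state == "search":
        return "attack" if sees_player else "patrol"
    return "patrol"

# Example tick sequence: the NPC spots the player, gets hurt, retreats, then re-engages.
state = "patrol"
for sees, hurt, cover in [(True, False, False), (True, True, False),
                          (True, True, True), (True, False, True)]:
    state = next_state(state, sees, hurt, cover)
    print(state)   # attack, take_cover, attack, attack
```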

#116 scatteh316
Member since 2004 • 10273 Posts

@PurpleMan5000 said:
@tdkmillsy said:

I genuinely was wondering if it's possible to build a PC to compete with the XB1X. Ignoring the fact that games are cheaper and multiplayer is free (you get free games on Live, and ease of use makes up for that), it's pretty obvious you can't compete. The PC they are talking about:

has less RAM

has less hard drive space

has less memory bandwidth

has no operating system

has no controllers

misses other features in the XB1X

So if you're looking for mid range PC action and don't want to play mouse+keyboard games, the XB1X is the way to go.

The key difference imo is that the PC will be a lot cheaper to upgrade later than the Xbone X will be to replace. You could make a compelling case either way, though. I just wish Microsoft would go all in on the Xbone X and give it its own exclusives. I feel like it will be held back by every game having to have an Xbone version.

It'll be held back because of the 4K resolution.

But the bolded part is why the Pro will ultimately have the better-looking games.

#117  Edited By Jereb31
Member since 2015 • 2025 Posts

@KungfuKitten said:
@metalslimenite said:

For the price, you can't beat what XB1X has to offer, tech and quality wise.

Are you taking royalties into consideration? The price upfront is not all you pay to use the console.

They forgot to factor in a TV and couch, duh.

That's like $4000 right there. Not to mention the Dorito & Mt Dew subscription.

#118 tormentos
Member since 2003 • 33784 Posts

@oflow said:

Yep. That's why I hate when hermits post these potato rigs as a counter to buying a console when they themselves own a $2000+ rig. Be honest and stop trying to sell shit to other people who actually are trying to get into PC gaming. A decent gaming rig is gonna cost you in the $1500 range for one with decent components and bells and whistles.

They're setting people up for a bad time with this. They also never count the cost of the OS or a decent mouse and keyboard or decent sound output. That's another $200+ on the price. Telling someone to buy one of those shit $15 keyboards and a $15 mouse is being an ass.

PC gaming is the best because you get what you pay for.

Who needs $1,500 or $2,000 for a decent PC?

What is a decent PC in your skewed view?

https://pcpartpicker.com/list/q83gjc

$1,245, and I didn't hold back; this ^^ is more than a decent PC. It would beat Scorpio quite easily and even has 4K Blu-ray, sporting a Ryzen 1700 and a GTX 1070.

If someone isn't even interested in movies, you can go even cheaper.

https://pcpartpicker.com/list/43GbHN

Ryzen 1500X 3.5GHz, 4 cores / 8 threads, + RX 580 (8GB of video memory), no drive, 8GB of system memory for a total of 16GB.

$830, and I can shave even more since I didn't go for the cheapest board, or a cheaper case or HDD.

So where in hell you pulled $1,500 from I don't know; maybe you were thinking about going i7 + 1080 Ti, as if you need that to have a DECENT PC.

@getslaidalot said:

I'm not a technical wizard, can someone tell me if the x1x is actually worth the $500 in specs alone?

Yes it is, and no one can or should deny that, but that is on a hardware basis only.

@slimdogmilionar said:

@tormentos: Bro, you do realize that the 1070 is not gonna give you 4K in all games. In order to reach 4K with a 1070, settings have to be turned down.

1800p will stress the Pro more than the X1X; not to mention the X is better than the Pro in every way: better CPU, GPU, and more RAM, just to name the obvious. The 1070 is not considered a true 4K card; it will hit 4K for some games, and just like a 1070 owner can choose whether they want 4K with custom settings vs 1440p @ 144Hz, game devs have the same options with the X.

Anyone who has a Polaris GPU knows that you gain more from overclocking the memclock; I didn't even up my GPU clock, just raised the memclock to 2150. This has been a widely discussed issue: the card is bandwidth bound, paired with the fact that AMD's memory compression is nowhere near Nvidia's. Seriously, the 480 is 6TF and the 1070 is 6.5TF with the same size bus as the 480; how would you explain that? Why does the 390 perform better than the 480 at higher resolutions? It has a 512-bit bus compared to the 480's 256-bit. The 480 was marketed as a 1080p card, so it really didn't need more than 256.

Just saying, if I felt the 1070 was ahead of the 1X it would be my next GPU; instead I'm looking towards the 1080, which can do 4K and is cheaper than the X.

If the GTX 1070 isn't a true 4K card, what makes you think Scorpio is? It has a Polaris GPU at 6TF; the RX 480 is 6TF plus OC, so it's the RX 580, and neither of the two beats the GTX 1070.

Yes it will stress the PS4 Pro more, but that is not my point. Scorpio's power advantage over the Pro is limited, not infinite. If the Pro is at 1800p because it lacks the power to go higher, running 4K on Scorpio in that same game will consume most of the extra resources, because you are using Scorpio's extra power to render above 1800p, exactly as happened with the PS4 and Xbox One, where the Xbox One peaked at 900p and the PS4 version used its extra 40% power to reach 1080p.

The problem here is simple: Scorpio's bandwidth is shared, so it doesn't have 326GB/s for the GPU; it has more like 294GB/s going by the XBO reservation, and that reservation could be higher. Which means that what Scorpio has, bandwidth-wise, over the RX 480 is little; worse, the RX 580 performs better than the RX 480 even though both have the same 256GB/s bandwidth.

Scorpio features AMD compression, not Nvidia's, so comparing it to the GTX 1070 is irrelevant.
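For what it's worth, the arithmetic behind that 294GB/s figure is just the total bus bandwidth minus an assumed CPU/system reservation; here's a small sketch using the numbers being thrown around in this thread (the 32GB/s reservation is the assumption argued above, not a published spec):

```python
# Shared-memory bandwidth left for the GPU = total bus bandwidth - CPU/system reservation.
# 326 GB/s is the X1X's quoted total; the reservation is an assumption argued in this thread.
total_bandwidth_gbs = 326.0
assumed_cpu_reservation_gbs = 32.0   # hypothetical figure, in line with the ~294 GB/s claim above

gpu_bandwidth_gbs = total_bandwidth_gbs - assumed_cpu_reservation_gbs
print(f"GPU share: {gpu_bandwidth_gbs:.0f} GB/s")          # ~294 GB/s
print(f"vs RX 480/580: {gpu_bandwidth_gbs - 256.0:.0f} GB/s more than their 256 GB/s bus")
```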

#119  Edited By BassMan
Member since 2002 • 17865 Posts

@Wasdie: The higher the frame rate, the more resources are required from the CPU. With a lot of games on console, the CPU is already being pushed tracking objects, physics, collisions, AI, etc. at 30fps. You need a lot of extra CPU power to feed double the fps to the GPU. Therein lies the bottleneck. The GPU can scale to higher frame rates and resolutions, but the CPU has to be able to keep up when scaling the frame rate. This is already a problem with the Pro and I don't see the X's CPU being that much more capable.

It becomes a compromise for developers. Do they narrow down the scope of the game to hit 60fps, or do they push simulation and interactivity at 30fps? If they choose heavy simulation and interactivity, we can at least scale our CPUs on PC to achieve a high frame rate, but console players do not have that luxury and have to deal with the compromise.
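To put the 30 vs 60 fps point in numbers: doubling the frame rate halves the per-frame time budget the CPU gets for simulation, AI, physics, and draw-call submission. A quick sketch (pure arithmetic, with a made-up simulation cost, not real game data):

```python
# Per-frame time budget shrinks as the target frame rate rises.
# If a console game's CPU-side simulation already takes ~30 ms per frame at 30 fps,
# that same workload simply does not fit in a 60 fps frame without cuts or a faster CPU.
for target_fps in (30, 60):
    budget_ms = 1000.0 / target_fps
    print(f"{target_fps} fps -> {budget_ms:.1f} ms per frame")

simulation_ms = 30.0   # hypothetical CPU cost of one frame's simulation at current scope
print("Fits the 60 fps budget:", simulation_ms <= 1000.0 / 60)   # False -> cut scope or cap at 30
```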

#120 darklight4
Member since 2009 • 2094 Posts

PC will always be better; its openness is its biggest strength, among many others. You don't have to wait to upgrade; you can do it whenever you want.

#121 scatteh316
Member since 2004 • 10273 Posts

@BassMan said:

@Wasdie: The higher the frame rate, the more resources are required from the CPU. With a lot of games on console, the CPU is already being pushed tracking objects, physics, collisions, AI, etc. at 30fps. You need a lot of extra CPU power to feed double the fps to the GPU. Therein lies the bottleneck. The GPU can scale to higher frame rates and resolutions, but the CPU has to be able to keep up when scaling the frame rate. This is already a problem with the Pro and I don't see the X's CPU being that much more capable.

Exactly, Scorpio only has a 9% CPU clock advantage so all things being equal Pro would have to be CPU limited at 56fps in order for that extra 9% on Scorpio to get it to the magic 60fps mark.
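The back-of-envelope scaling being applied here is just frame rate times clock ratio; it assumes a fully CPU-bound game that scales linearly with clock speed, which, as the replies below argue, is an oversimplification:

```python
# Naive linear scaling of a CPU-bound frame rate with CPU clock speed.
# Assumes the game is 100% CPU-limited and scales perfectly with clock, which real games don't.
pro_fps = 56.0          # hypothetical CPU-limited frame rate on the Pro, as used above
clock_advantage = 1.09  # ~9% higher CPU clock claimed for Scorpio in this thread

print(f"Scorpio (naive estimate): {pro_fps * clock_advantage:.1f} fps")   # ~61 fps
```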

#122 Wasdie  Moderator
Member since 2003 • 53622 Posts

@BassMan: I understand you need more CPU power for higher framerates, but I think you're over estimating just how much. You have to remember games like FIFA and Madden, games tracking 20+ highly coordinated, highly "intelligent" AI, decision making coaches, big audiences, the sidelines, physics on the players (they are driven by momentum and physics engines today, not pure animations), dynamic animations (lots of blending), rag doll animations and physics (collisions), physics on the ball (needs to be more accurate than your typical debris physics), and even the clothing on the characters (for dat fidelity), they manage to be 60 fps games. Sports games get a bad rap on the internet but are pretty impressive technical feats if you start looking at all of their moving parts.

You're going to be bottlenecked by the fidelity of your game long before you hit a CPU limitation. Pushing a lot of draw calls, unoptimized lighting/shadows, unoptimized culling, and stuff like that is going to do far more damage than a slightly weaker CPU. Even on the PC people are always surprised at how weak of a CPU you can throw at modern games and still have them run at 60.

#123  Edited By Wasdie  Moderator
Member since 2003 • 53622 Posts

@scatteh316: That's not how any of this works. You can't just take percents and apply them to framerates that way. Way more complicated than that. Don't oversimplify things.

The CPU isn't that big of a bottleneck. The Xbox One X has a much more powerful GPU and when the direct comparisons come out you'll see it will push 4k, or at least higher resolutions, with more stable framerates than the Pro.

#124  Edited By scatteh316
Member since 2004 • 10273 Posts

@Wasdie said:

@scatteh316: That's not how any of this works. You can't just take percents and apply them to frame rates that way. Way more complicated than that. Don't oversimplify things.

The CPU isn't that big of a bottleneck. The Xbox One X has a much more powerful GPU and when the direct comparisons come out you'll see it will push 4k, or at least higher resolutions, with more stable frame rates than the Pro.

I know you can't oversimplify, but this is SW, the place where you have to dumb shit down so people understand.

A lot of Digital Foundry and NX videos have pointed to CPU limitations in loads of games since these consoles released.

There are also instances in some comparisons where the Xbox One turns in better performance than the PS4 (driving through the streets of GTA 5, for example) even with the PS4's GPU advantage; the comparisons put it down to the Xbone's slightly higher CPU clock.

So how do you explain these little instances if there's not that much of a bottleneck?

#126 BassMan
Member since 2002 • 17865 Posts

@Wasdie: Most games are GPU bound, but a lot of the open world and sandbox games are CPU bound as well. The CPU is already holding back games like GTA V, Skyrim, The Witcher 3, etc. from hitting 60fps on Pro. I see this problem continuing with X.

#127  Edited By scatteh316
Member since 2004 • 10273 Posts

@xboxiphoneps3 said:

@scatteh316: 326 GB/s + DCC= effectively nearly 350 GB/s. 8 core Jaguar in the X1X doesn't use more than 25 GB/s of bandwidth in a single CPU cycle, plus including some other components (10 GB/s~) give or take.

That still leaves X1X GPU with 300 GB/s bandwidth + solely for the GPU alone

Rona is actually pretty much exactly on point and right

No he's not.... The PC GPUs also have DCC, so if you're going to factor that in for Scorpio then you need to factor it in for the other GPUs....... and the PC GPUs don't have the CPU sucking bandwidth off them, so they have higher numbers than he quoted.

His figures are all wrong.

#128  Edited By Wasdie  Moderator
Member since 2003 • 53622 Posts

@BassMan: Those games aren't as CPU limited as you may think. Open world games have the GPU problem of large fluctuations in polygon count, with the potential for massive amounts of particle effects to be called up in an unpredictable and uncontrollable fashion. A linear game has much tighter control over the average polygon count and the amount of fidelity and complexity on screen. Generally you don't want an unlocked framerate that swings from 60 to 30; that feels awful. So you have to think of the worst-case scenarios and basically plan around those. 30 is a safer bet for any open world game.

A lot of GPU bottlenecks are often blamed on the CPU because of old myths. A lot of networking bottlenecks are also blamed on the CPU. Even a powerful i7 OC'ed to 4.5 GHz can't do shit if its main game loop is on hold waiting for the packets to all come in. That's actually why having the 8 cores, rather than super high clock speeds, is to a console's advantage. Though proper multithreading is more of an art than a science, and it's taken a good decade for engines to truly start utilizing multiple cores. It's still not there, partly because it's difficult, but also because rewriting those low-level engine functions would massively delay most projects, and that's not acceptable in this industry.

I'm not saying there is no CPU limitation to the Xbox One X or PS4 Pro; I just think that people greatly underestimate just how much raw GPU power is needed for 1080p60 with high fidelity, or even 4K30 with high fidelity. 4K60? lol. Even a GTX 1080 Ti struggles to maintain that, and that card is $700. The PS4's and Xbox One's GPUs are incredibly weak, and the PS4 Pro's and now Xbox One X's aren't nearly as powerful as Sony and Microsoft want you to believe.

Outside of a few exceptions, the GPU is going to be the prime limitation of these consoles moving forward.
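A tiny sketch of the "fast CPU waiting on packets" point (the sleep below stands in for a blocking network read; all timings are made up): while the main loop is blocked waiting for data, clock speed is irrelevant, because the core is doing nothing.

```python
import time

# Toy single-threaded game loop: most of each frame is spent blocked "waiting for packets".
# time.sleep() stands in for a blocking socket recv(); numbers are illustrative only.
NETWORK_WAIT_S = 0.020   # 20 ms stalled on the network per frame (hypothetical)

start = time.perf_counter()
work_time = 0.0
for _ in range(10):
    time.sleep(NETWORK_WAIT_S)             # main loop on hold; a faster CPU changes nothing here
    t = time.perf_counter()
    _ = sum(i * i for i in range(50_000))  # stand-in for the frame's actual simulation work
    work_time += time.perf_counter() - t
elapsed = time.perf_counter() - start
print(f"10 frames: {elapsed*1000:.0f} ms total, only {work_time*1000:.0f} ms of it was CPU work")
```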

#129  Edited By Jereb31
Member since 2015 • 2025 Posts

@Wasdie: It's kind of like saying "I can get all the action of a Ferrari going 100km/h in my busted Ford Laser for only 1/100th the price!!".

I mean, yeah, they both go 100km/h, but the Ford is just terrible.

The Ford is the console, for those who don't get nuance.

#130 PinchySkree
Member since 2012 • 1342 Posts

@gamecubepad said:
@getslaidalot said:

I'm not a technical wizard, can someone tell me if the x1x is actually worth the $500 in specs alone?

The answer is overwhelming...YES. Specs are worth more than $500. You'd be spending $800-900 for a comparable PC build and it still wouldn't be in a tidy little package like Scorpio.

The price would actually matter if it wasn't a closed system with none of the benefits of PC.

They farm the cost back using high game prices and xbox live.

#131  Edited By AdobeArtist  Moderator
Member since 2006 • 25184 Posts

@Wasdie said:

@BassMan: Those games aren't as CPU limited as you may think. Open world games have the GPU problem of large fluctuations in polygon count, with the potential for massive amounts of particle effects to be called up in an unpredictable and uncontrollable fashion. A linear game has much tighter control over the average polygon count and the amount of fidelity and complexity on screen.

OK, but when you say MMO games aren't as CPU limited as you think, you do mean that while it's still more GPU related, the CPU is still a factor in things, right?

My PC was formerly running on an FX 8350, and even after going from an R9 380 to a GTX 1060 (6GB) I'd still get frame rate dips into the 30s in a game like Tera, and could hardly go above 45 fps on average. And this game isn't exactly graphically demanding. In The Division, which is built on a persistent-world engine, while I could get averages in the 70s, I would still get fairly frequent dips into the mid 30s in more intense engagements.

But then I got the Ryzen 1500X, and what an improvement. The Division isn't getting much higher averages, but the minimums are more in the mid 50s and happen less frequently, so the frame rate fluctuations are significantly less noticeable for an overall smoother experience. And in the MMO space, I can get averages of 60-65 with the occasional dip into the high 40s in the heavily populated player hubs.

Despite the FX 8350 having 8 cores and a base speed of 4 GHz, I became aware of the low IPC it had compared to its i5 counterparts, so there clearly was a bottleneck coming from the CPU. And I mean more towards MMO persistent-world designs, RTS, and even games like GTA V that are reported to be CPU intense.

#132 gamecubepad
Member since 2003 • 7214 Posts

@PinchySkree:

Until we see the teardown analysis and have a BoM there is no, "they farm it back". They probably aren't even loss-leading.

This is a game console designed around gamepad use, not a word-processing spreadsheet maker with 5 different storefronts/user id's to navigate. Also, such great deals to be had here at reputable outlets such as CDKeys and G2A. >_>

#134  Edited By slimdogmilionar
Member since 2014 • 1343 Posts

@tormentos: lol, I have no intentions of buying a 1X; I never bought into the whole true 4K thing and I refuse to pay $500 for a system again... I think. But the point I'm trying to make is that if you have a card that can handle 4K at certain settings and a console that can do the same, how can you say the console is not in the same ballpark, without concrete proof otherwise?

#135  Edited By kemar7856
Member since 2004 • 11783 Posts

everyone knows pc gaming is dead

#137 lundy86_4
Member since 2003 • 61524 Posts

It's basically a loss-leading price versus a full-fledged PC. It's just asinine to try and draw comparisons.

#138  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@ronvalencia said:

The article from PCgamer didn't factor in RX-580's memory bottleneck situation. RX-580 is not the first Polaris 10* with 6 TFLOPS i.e. MSI RX-480 GX has 6.06 TFLOPS.

Read http://gamingbolt.com/ps4-pro-bandwidth-is-potential-bottleneck-for-4k-but-a-thought-through-tradeoff-little-nightmares-dev

PS4 Pro's has 4.2 TFLOPS with 218 GB/s hence the ratio is 51.9 GB/s per 1 TFLOPS and it's already memory bottlenecked.

RX-580 has 6.17 TFLOPS with 256 GB/s, hence the ratio is 41.49 GB/s per 1 TFLOPS, and its memory bottleneck is worse than the PS4 Pro's.

X1X has 6 TFLOPS with 326 GB/s hence the ratio is 54.3 GB/s per 1 TFLOPS.

RX-480 (5.83 TFLOPS)'s 264 GB/s effective bandwidth, hence very close to R9-290 (4.8 TFLOPS) and R9-290X (5.6 TFLOPS) results.

I welcome for PCgamer's Tuan Nguyen to debate me.

PS; R9-390X's effective memory bandwidth is about 315.6 GB/s and has no DCC (Delta Color Compression) features.

Check list

1. I have game programmer's point of view on memory bottlenecks with PS4 Pro.

2. I have both AMD and NVIDIA PC benchmarks showing memory bottlenecks for Polaris 10 XT. I have cached additional graphs showing similar patterns e.g. F1 2016. Forza Horizon 3 and 'etc'. I'm ready for General Order 24/Base Delta Zero.
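For reference, the GB/s-per-TFLOP figures quoted above are simple division; here's a quick sketch reproducing them from the numbers as stated in the quote (the specs themselves are the quote's claims, not independently verified here):

```python
# Memory bandwidth per TFLOP, using the figures quoted in this thread.
systems = {
    "PS4 Pro": (218.0, 4.2),    # GB/s, TFLOPS (as quoted)
    "RX 580":  (256.0, 6.17),
    "X1X":     (326.0, 6.0),
}
for name, (bandwidth_gbs, tflops) in systems.items():
    print(f"{name}: {bandwidth_gbs / tflops:.1f} GB/s per TFLOP")
```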

My god, you don't fu**ing get tired of getting owned?

Ryzen, Vega, FP16, $399 based on SoC size: you just don't get tired of being wrong.

What RX 580 bandwidth bottleneck? Link me to it, please, and stop inventing shit; link me to how the RX 580 has the same bandwidth as the RX 480 yet performs better.

Why do you bring up the Pro and link the Pro in this thread? You sure don't learn your lesson; you always do this shit and end up owned. You quote a developer with god knows what intentions and claim it to be god's word. You did the same with DF, which you quote when it serves you best, but when they claimed Scorpio would have a CPU bottleneck, again you freaked out and started to attack it...

Did you take into account that 1TF used for FP16 processing would yield 2TF of performance while using the same exact bandwidth? Did you? Oh no, I am sure you didn't, because you are a blind fanboy who points out pitfalls in the PS4 hardware but overhypes shitty cosmetic crap in Xbox hardware; you have spent almost 4 years doing that shit.

Jit Compression.

Tile Resources.

Data move engines

Audio Block

ESRAM

150mhz faster CPU

DX12

The cloud

Basically you have tried to hype every single piece of crap on the Xbox One and it all served for nothing, and you are doing the same here.

The RX 580 has 345GB/s effective bandwidth factoring in DCC, the same as the RX 480, which you love to pretend doesn't have DCC and is somehow just 256. So it has 345GB/s effective to itself, which is more than the 315GB/s you falsely claim for the R9 390X, yet the R9 390X beats it by a frame in some games at 4K. You know that there are instances where more CUs are actually more beneficial than fewer CUs at a higher clock?

You don't even fu**ing know for a fact what the CPU reservation on Scorpio is; DF hasn't asked, which I find odd as well, but here you are parading around as if Scorpio had all the bandwidth to itself.

You don't have the standing to challenge anyone to a debate; he clearly saw what we all saw: a CPU bottleneck and a shared memory bandwidth that doesn't exist on PC.

And considering that your ass has been handed to you by me this year more times than I can remember, you simply and blindly assume crap based on nothing but your biased opinion; for you, MS and AMD are god's gift to earth.

You are wrong, and you omit things on purpose when they don't serve you best; I told you Scorpio would be $500 but you did not listen.

@ronvalencia said:

From https://www.techpowerup.com/forums/threads/article-just-how-important-is-gpu-memory-bandwidth.209053/

GTX 970's (FarCry 4 @ 1440p Very High) memory bandwidth usage. GTX 970 has 224 GB/s physical memory bandwidth.

Note why Microsoft aimed for more than 300 GB/s for 4K.

PS4 Pro's 218 GB/s is fine for 1080p and 1440p.

3x owned now, noob.

Please stop. The PS4 has 4K native games with 218GB/s; the effective bandwidth of the PS4 Pro is 218GB/s + 35% from DCC, which = 294GB/s, and that is without taking into account that the PS4 Pro's FP16 feature can push 2x performance over the same bandwidth.

So needing 300GB/s is a joke, and again you are assuming what MS claims without actual clarification. They say more than 300GB/s, but they were talking about the complete system, not just the GPU; it is impossible that MS gives just 26GB/s to the CPU when the XBO gets 30GB/s at 1.75GHz, but since you are a blind fanboy you don't look into that.

Scorpio has 326GB/s SHARED and you don't know how much is reserved for the system and CPU; it certainly isn't zero like you once tried to claim, lemming. The RX 580 has 256GB/s for the GPU + more than 50GB/s from DDR4.

256GB/s + 50GB/s; hell, look at freaking Skylake: 60+GB/s reads and almost 60GB/s writes as well. Why in hell do you think Ryzen benefits so much from DDR4 over DDR3?

So you want to argue that shitty Jaguar has Ryzen-like performance, but somehow without using bandwidth the way Ryzen uses it?

I predict a minimum of 30GB/s like the XBO, but it could be more.

@scatteh316 said:

Can you please adjust your bandwidth-per-TFLOP figures for the Xbox One X so they show a true reflection of the bandwidth available to the GPU once the CPU bandwidth allocation has been factored in.

Oh, he will simply play blind. I have spent more than a year arguing with him about his totally bullshit bandwidth claims for Scorpio; he even tried to argue that the CPU would use zero bandwidth... hhahahahaa

Because Sony used a chart showing from zero to more than 30GB/s, he somehow thinks that it would use zero bandwidth..

The guy is completely delusional and sold on MS PR bullshit. In fact he even invents crap based on nothing but a misquote; he assumed Scorpio had a Ryzen CPU, a Vega GPU and FP16 only because Phil Spencer claimed they wanted to do something different from the Pro, which ended up being a lie, since they chose Polaris like the Pro and Jaguar like the Pro, both on a 16nm process; they just use more RAM. But it is clear that Rondementia loves to assume crap, and what he doesn't assume he invents, like he did with the price of Scorpio, which I and many others claimed would be $500, while he claimed the SoCs for both had only a 12% difference in size so it would be $400. He downplayed the 4K Blu-ray, extra RAM, better cooling, and extra CUs as well; for him everything on Scorpio was as expensive as on the Pro.. lol

128 bit DDR4-1333 = 20 GB/s

Anything beyond 60 Hz is pointless for game consoles. Furthermore, a desktop PC with a discrete GPU doesn't have fusion links and zero-copy features to reduce the CPU's external memory hit rates.

Note why the PS4 CPU's 10 GB/s is enough for Battlefield at 60 Hz. The PS4 CPU's low 1.6 GHz budget impacts multi-platform game development.

Your memory benchmark deals with simple burst-mode memory transfers and doesn't reflect a proper ALU/branch operation vs memory bandwidth benchmark, e.g. a computation data set that fits within the L1 and L2 caches reduces external memory hit rates.

You haven't experienced the performance degradation between SwiftShader's respect-L2-cache-boundary setting and its L2-cache-overspill setting, and that's running on Intel i7 CPUs.

It's better to tile computation data sets based on L1 and L2 cache sizes and perform the external memory transfer after the tile computation is completed.

Intel Core's external memory clock cycle latency is almost half of PS4 Jaguar's 200 cycles.

X1X GPU is not bound by RX-580's 256 bit GDDR5-8000 limits.

For game consoles, respecting L2 cache boundary is important since external memory access is costly.

Try again.
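As a generic illustration of the tiling idea described above (not Xbox or SwiftShader specific, just a sketch of cache blocking under assumed tile sizes): process the data in chunks sized to fit in cache, finish all the work on one chunk, then move on, so each element is pulled from external memory once instead of being re-fetched on every pass.

```python
# Generic cache-blocking sketch: do all passes over one cache-sized tile before moving on,
# instead of streaming the whole array through memory once per pass.
# TILE_ELEMS is a stand-in for "fits in L2"; real code would size it from the cache size.
TILE_ELEMS = 64 * 1024

def process_tiled(data, passes):
    # Each tile is touched once from main memory, then reused from cache for every pass.
    for start in range(0, len(data), TILE_ELEMS):
        tile = data[start:start + TILE_ELEMS]
        for do_pass in passes:
            tile = [do_pass(x) for x in tile]
        data[start:start + TILE_ELEMS] = tile
    return data

# Example: two passes over the same data, tile by tile.
values = list(range(200_000))
values = process_tiled(values, [lambda x: x * 3, lambda x: x + 1])
print(values[:5])   # [1, 4, 7, 10, 13]
```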

----------------------------------

Jit Compression**. //memory storage and memory transfer related, Nothing to do with shader bottlenecks. Your argument is a red herring.

Tile Resources**. // keep TMUs at fast memory pool. Nothing to do with shader bottlenecks. Your argument is a red herring.

Data move engines** // For ESRAM, Nothing to do with shader bottlenecks. Your argument is a red herring.

Audio Block // Your argument is a red herring. Nothing to do with shader bottlenecks. Your argument is a red herring.

ESRAM** // DDR3 workaround, Nothing to do with shader bottlenecks. Your argument is a red herring.

150mhz faster CPU // Hitman crowd scene superiority. Nothing to do with shader bottlenecks. Your argument is a red herring.

DX12 // Reduce CPU workload. Nothing to do with shader bottlenecks. Your argument is a red herring.

The cloud // Remote physics compute device. Nothing to do with shader bottlenecks. Your argument is a red herring.

**All workaround measures to fake the W5000's 256-bit GDDR5 memory storage and memory bandwidth. MS should cut to the chase and select GDDR5 instead, i.e. keep it simple.

#139  Edited By ronvalencia
Member since 2008 • 29612 Posts

@04dcarraher said:
@slimdogmilionar said:

@scatteh316: Compared to the RX 480 it is. I have a 480 and you can't even hit 1440p without turning down settings; mind you, that's a 6 TFLOP card paired with an i5. But you don't understand what he's saying about memory bandwidth; this is why he keeps referencing Nvidia, because the 1070 performs better than the 480 and 580 with the same bandwidth, same with the 1080: it only has a 256-bit bus, but can do 4K. Nvidia's memory compression is insane, and for months he's been telling you guys that MS has stolen pages from Nvidia and AMD in order to fix the bandwidth problems AMD GPUs have. Not to mention that the higher the resolution goes, the less the CPU has to do with performance. Look at the 1060 vs 480 for reference: the 1060 is able to match and sometimes beat the 480 even though it only has a 192-bit bus vs the 480's 256.

The problem is that there's more to it than just adding memory bandwidth to solve performance once you get to 1440p or higher. Polaris-based AMD GPUs have Delta Color Compression, the same ability as Nvidia's Maxwell or Pascal GPUs. The compression ratio isn't as advanced as Nvidia's Pascal architecture, though. However, Scorpio and the PS4 have the same DCC feature as Polaris, which can save up to 35% memory bandwidth vs using uncompressed data. Pascal's DCC is about 20% better than Maxwell's, so you're looking at roughly 40% bandwidth savings, and with the help of GDDR5X closer to 70%.

Bandwidth and its 256-bit bus are not the whole picture of why the RX 480 or RX 580 can't perform well enough at 1440p or higher with full settings. It's the fact that the GPUs themselves are only able to push 40 GPixel/s (RX 480) and 42.9 GPixel/s (RX 580). A test done on an RX 580 at 1440p, overclocked to 1500MHz from 1290MHz with its memory at an effective 9000MHz vs 8000MHz (stock), only yielded a 7% increase in performance over the default-clocked RX 580. Mind you, that memory overclock pushed bandwidth to 288GB/s.

The reason the GTX 1060 can perform better than the RX 480 is its 72.3 GPixel/s rate, not solely its memory compression; its texture rate is only 120 GTexel/s while the RX 480's is 182 GTexel/s, and the GTX 1070 does 96 GPixel/s and 180 GTexel/s. The GTX 1070 can perform up to 40% better than an RX 480 at 4K.

Once we know the X1X GPU's performance numbers we can see where it sits between the RX 480 and the GTX 1070.

1. http://www.3dmark.com/3dmv/5521740

2. http://www.3dmark.com/3dmv/5563081

The color fill score for the GTX 1060 is 24.6 (Ref 1), which is less than the RX-480's 27.2 (Ref 2) score.

Your argument on raw color fill is debunked.

The real problem is shown by the following diagrams.

Real 3D game workloads have iterative texture read/write to ROP read/write loop workloads.

Both AMD and NVIDIA are in agreement, i.e. the pixel engine must be connected to the L2 cache.

---------------

Your 1440p example is flawed. At higher resolutions such as 4K, the difference from higher memory bandwidth is larger.

The gap between the RX-480 and R9-390X is 7 percent.

The gap between the RX-480 and R9-390X is 13 percent. The gap would be wider if the R9-390X gained DCC and lower graphics pipeline bottlenecks (lower latency).
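For context on where the GPixel/s and GTexel/s figures in this exchange come from, they are just ROP or TMU count times clock speed; here's a sketch reproducing the numbers quoted above (the ROP/TMU counts and clocks below are my assumptions from published spec sheets, not figures from this thread):

```python
# Fill rates are unit count x clock: pixels/s from ROPs, texels/s from TMUs.
# Unit counts and clocks are assumed from public spec sheets (base clocks for the Nvidia parts).
gpus = {
    # name:      (ROPs, TMUs, clock_MHz)
    "RX 480":    (32, 144, 1266),
    "RX 580":    (32, 144, 1340),
    "GTX 1060":  (48,  80, 1506),
    "GTX 1070":  (64, 120, 1506),
}
for name, (rops, tmus, mhz) in gpus.items():
    gpix = rops * mhz / 1000.0   # GPixel/s
    gtex = tmus * mhz / 1000.0   # GTexel/s
    print(f"{name}: {gpix:.1f} GPixel/s, {gtex:.1f} GTexel/s")
```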

#140  Edited By ronvalencia
Member since 2008 • 29612 Posts

@scatteh316 said:
@xboxiphoneps3 said:

@scatteh316: 326 GB/s + DCC= effectively nearly 350 GB/s. 8 core Jaguar in the X1X doesn't use more than 25 GB/s of bandwidth in a single CPU cycle, plus including some other components (10 GB/s~) give or take.

That still leaves X1X GPU with 300 GB/s bandwidth + solely for the GPU alone

Rona is actually pretty much exactly on point and right

No he's not.... The PC GPUs also have DCC, so if you're going to factor that in for Scorpio then you need to factor it in for the other GPUs....... and the PC GPUs don't have the CPU sucking bandwidth off them, so they have higher numbers than he quoted.

His figures are all wrong.

PC dGPU has CPU-->PCI-E version 3.0 16X (16 GB/s per direction) --> GPU memory copy events. PC's lack of "zero copy" features sucks.

#141 Jebus213
Member since 2010 • 10056 Posts

@howmakewood said:

You asked for charts? Well they are about to arrive.

kek

#142 Jebus213
Member since 2010 • 10056 Posts

MOAR CHARTS AND GRAPHS!!! MOOOAAAR CHARTS AN GRAPHS!!!!!!!!!!!!!!!

#143 GarGx1
Member since 2011 • 10934 Posts

@oflow said:
@howmakewood said:

@tdkmillsy: Strictly performance-wise these budget PCs are nothing but trash; if you actually want performance you'd better lay down the dough. But at least they have an SSD in the budget build... otherwise it's looking at maybe 60fps on most (not all) new games at 1080p, because the GPU is trash at anything higher.

Yep. That's why I hate when hermits post these potato rigs as a counter to buying a console when they themselves own a $2000+ rig. Be honest and stop trying to sell shit to other people who actually are trying to get into PC gaming. A decent gaming rig is gonna cost you in the $1500 range for one with decent components and bells and whistles.

They're setting people up for a bad time with this. They also never count the cost of the OS or a decent mouse and keyboard or decent sound output. That's another $200+ on the price. Telling someone to buy one of those shit $15 keyboards and a $15 mouse is being an ass.

PC gaming is the best because you get what you pay for.

I, for one, have never and will never advise anyone to buy or build a cheap-assed gaming rig. As you said, you get what you pay for, and you can do a hell of a lot more with it.

Anyone with a budget for a potato with wires hanging out should get a console.

#144  Edited By whalefish82
Member since 2013 • 511 Posts

@Wasdie said:
@whalefish82 said:

I still have to question the CPU choice in these consoles. I mean, the i5 I upgraded nearly two years ago was already quite a lot more powerful than the One X's Jaguar chip. Frame rates are really being held back on consoles right now as a result, and AI advancement is stalling all round.

Framerates aren't suffering from a lack of CPU power. Framerates for games at 4K, and even 1080p60, will suffer from the GPU first, then memory bandwidth, and then the CPU. At 4K the memory bandwidth bottleneck is real.

Also, please: "AI advancement is stalling all round" is A) a gross misinterpretation of the state of NPC behavior in games and B) nothing to do with consoles if it is a problem. NPC behavior and decision making are far more dependent on how much time and effort a developer is willing to put into them than anything else. Super fast CPUs don't mean crap. Proper AI development with our current methods (this includes the highest-end games on the PC) is incredibly labor intensive. AI doesn't "think," and thus all of its various behavioral patterns must be defined by people. The more of these behaviors you program, and the better you create the logical flow and state machines, the better the AI behaves. There is also the issue of animations. AI that is believable needs excellent and believable decision making, pathfinding, and animation. If any of those fail, the AI becomes unbelievable and silly.

Level design, and how well your AI's pathfinding and decision making mesh with that level design, is incredibly important. Yet again, another very labor-intensive process.

The only thing a CPU is really going to bottleneck within the context of our video games (board games like chess are a different story) is the volume of NPCs that can be tracked and animated. Considering the consoles have multiple games with literally hundreds of AI dudes on screen, CPU strength relative to NPC processing needs hasn't been a problem since the 7th gen with the PS3/360.

Wow, you really schooled me on a subject I clearly didn't understand from a technical standpoint, and I genuinely appreciate that. I still think AI is seriously lagging behind the evolution of graphics, particularly in non-sports games.

I'd like to ask you a question regarding your points. I have a stock i7 6700K, 16GB DDR4 RAM, MSI Gaming GTX 1080 rig, but it can't run a game like The Witcher 3 with ultra settings at 4K and 60 FPS. So, if I wanted to hit 60 FPS, would having more than 8GB of VRAM be the first thing to improve?

#145 ronvalencia
Member since 2008 • 29612 Posts

@lundy86_4 said:

It's basically loss-leading price versus a full-fledged PC. It's just asinine to try and draw comparisons.

Phil Spencer has claimed the X1X's $499 is priced "at cost," i.e. they are not taking a loss on each unit.

#146 lundy86_4
Member since 2003 • 61524 Posts

@ronvalencia said:

Phil Spencer has claimed X1X's $499 is priced "at cost" i.e. they are not making a lost for each unit.

Link? Anything I see shows him skirting the question and simply stating that they make no money on hardware.

#147  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:
@oflow said:

Yep. That's why I hate when hermits post these potato rigs as a counter to buying a console when they themselves own a $2000+ rig. Be honest and stop trying to sell shit to other people who actually are trying to get into PC gaming. A decent gaming rig is gonna cost you in the $1500 range for one with decent components and bells and whistles.

They're setting people up for a bad time with this. They also never count the cost of the OS or a decent mouse and keyboard or decent sound output. That's another $200+ on the price. Telling someone to buy one of those shit $15 keyboards and a $15 mouse is being an ass.

PC gaming is the best because you get what you pay for.

Who needs $1,500 or $2,000 for a decent PC?

What is a decent PC in your skewed view?

https://pcpartpicker.com/list/q83gjc

$1,245, and I didn't hold back; this ^^ is more than a decent PC. It would beat Scorpio quite easily and even has 4K Blu-ray, sporting a Ryzen 1700 and a GTX 1070.

If someone isn't even interested in movies, you can go even cheaper.

https://pcpartpicker.com/list/43GbHN

Ryzen 1500X 3.5GHz, 4 cores / 8 threads, + RX 580 (8GB of video memory), no drive, 8GB of system memory for a total of 16GB.

$830, and I can shave even more since I didn't go for the cheapest board, or a cheaper case or HDD.

So where in hell you pulled $1,500 from I don't know; maybe you were thinking about going i7 + 1080 Ti, as if you need that to have a DECENT PC.

@getslaidalot said:

I'm not a technical wizard, can someone tell me if the x1x is actually worth the $500 in specs alone?

Yes it is, and no one can or should deny that, but that is on a hardware basis only.

@slimdogmilionar said:

@tormentos: Bro, you do realize that the 1070 is not gonna give you 4K in all games. In order to reach 4K with a 1070, settings have to be turned down.

1800p will stress the Pro more than the X1X; not to mention the X is better than the Pro in every way: better CPU, GPU, and more RAM, just to name the obvious. The 1070 is not considered a true 4K card; it will hit 4K for some games, and just like a 1070 owner can choose whether they want 4K with custom settings vs 1440p @ 144Hz, game devs have the same options with the X.

Anyone who has a Polaris GPU knows that you gain more from overclocking the memclock; I didn't even up my GPU clock, just raised the memclock to 2150. This has been a widely discussed issue: the card is bandwidth bound, paired with the fact that AMD's memory compression is nowhere near Nvidia's. Seriously, the 480 is 6TF and the 1070 is 6.5TF with the same size bus as the 480; how would you explain that? Why does the 390 perform better than the 480 at higher resolutions? It has a 512-bit bus compared to the 480's 256-bit. The 480 was marketed as a 1080p card, so it really didn't need more than 256.

Just saying, if I felt the 1070 was ahead of the 1X it would be my next GPU; instead I'm looking towards the 1080, which can do 4K and is cheaper than the X.

If the GTX 1070 isn't a true 4K card, what makes you think Scorpio is? It has a Polaris GPU at 6TF; the RX 480 is 6TF plus OC, so it's the RX 580, and neither of the two beats the GTX 1070.

Yes it will stress the PS4 Pro more, but that is not my point. Scorpio's power advantage over the Pro is limited, not infinite. If the Pro is at 1800p because it lacks the power to go higher, running 4K on Scorpio in that same game will consume most of the extra resources, because you are using Scorpio's extra power to render above 1800p, exactly as happened with the PS4 and Xbox One, where the Xbox One peaked at 900p and the PS4 version used its extra 40% power to reach 1080p.

The problem here is simple: Scorpio's bandwidth is shared, so it doesn't have 326GB/s for the GPU; it has more like 294GB/s going by the XBO reservation, and that reservation could be higher. Which means that what Scorpio has, bandwidth-wise, over the RX 480 is little; worse, the RX 580 performs better than the RX 480 even though both have the same 256GB/s bandwidth.

Scorpio features AMD compression, not Nvidia's, so comparing it to the GTX 1070 is irrelevant.

Your XBO 30 GB/s reservation is a misunderstanding of cache coherency.

http://www.mcvuk.com/news/read/microsoft-address-xbox-one-vs-ps4-performance-mis-information/0120795

Quoting Microsoft (first party source).

We understand GPGPU and its importance very well. Microsoft invented Direct Compute, and have been using GPGPU in a shipping product since 2010 - it's called Kinect

Speaking of GPGPU - we have 3X the coherent bandwidth for GPGPU at 30gb/sec which significantly improves our ability for the CPU to efficiently read data generated by the GPU.

The 30 GB/s coherency figure refers to the CPU-to-GPU link.

The reason for the coherent bandwidth increase was Kinect. Kinect's role has been reduced, and so has the need for this feature level.

Coherency refers to the hardware function that keeps both the CPU's and GPU's cache pages in sync with data changes. This is only relevant when both the GPU and CPU are accessing the same memory page.

This is the bandwidth rate at which the snoop-request function can update. Modern x86 systems only update data when there's a snoop request from the other endpoint processor and both processor endpoints are looking at the same memory page.

#148  Edited By ronvalencia
Member since 2008 • 29612 Posts

@lundy86_4 said:
@ronvalencia said:

Phil Spencer has claimed X1X's $499 is priced "at cost" i.e. they are not making a lost for each unit.

Link? Anything I see shows him skirting the question and simply stating that they make no money on hardware.

http://www.businessinsider.com/xbox-one-x-hands-on-photos-video-2017-6?r=US&IR=T

Phil Spencer told me in an interview. He was quick to point out that it's not losing Microsoft money, but that the business model is similar to a razor and razor blades — sell the box at cost, make money on the games/accessories/etc.

#149  Edited By Wasdie  Moderator
Member since 2003 • 53622 Posts

@AdobeArtist: The problem with MMOs is that the CPU spends a ton of time waiting for network operations to complete. This gives the perception of poor CPU performance, but in reality the problem is the CPU is sitting there doing nothing for large chunks of time when you're in more populated areas.

Though some MMOs like Planetside 2 also have a lot of stuff going on that requires a lot of draw calls and whatnot from the CPU as well. The massive player counts make for very unpredictable scenarios.

#150 Alucard_Prime
Member since 2008 • 10107 Posts

In other words the XOneX is great value for your money.....noted.

Good price for Canadians too at $599 if you consider the Xchange rate.