DX12 Will not change X1's graphics capabilities

#201  Edited By 04dcarraher
Member since 2004 • 23832 Posts

@btk2k2 said:

That graph is showing latency as a function of clock cycles. Yes, in that scenario the cycle latency for GDDR5 is higher than DDR3's, but because GDDR5 runs through more clock cycles in the same amount of time, the latency as a function of time is similar for both, as the Hynix data sheets have shown.

Now I am looking at some latency benchmarks for quad- vs dual-channel memory on the X79 motherboard and there is no difference in latency, as you can see here (picture on left). This compares 2x 64-bit buses to 4x 64-bit buses and shows no difference in latency at all. I see no reason to assume that 8x 32-bit buses would incur a latency penalty when going from two to four 64-bit buses does not.

It is blatantly obvious that L1 cache has lower latency than L2 cache, which has lower latency than L3 cache, which has lower latency than system memory. Why do you think server CPUs have such large L3 caches compared to their desktop counterparts? It is to avoid trips to main memory. These CPUs are the same spec, so if one has to make a trip to main memory, so does the other, and the overall trip time is practically the same for both systems.

This 'PS4 has higher memory latency' myth has to stop; it is bogus, I have debunked it many times now, and it is just getting tiresome.

The Hynix data sheets only cover the modules, not the controllers, as far as I've seen... Also, are you even reading the charts you linked correctly? You're comparing quad channel vs dual channel, and quad is solely for bandwidth needs. If you look at the dual vs quad results with Sandra, quad channel does indeed add latency, and your AIDA64 example shows that clock rates change the results. The fact is that GDDR5 does add latency over DDR3; you can't beat around the bush. Is it a game changer? No, but every little bit helps. Do we have proof or specs suggesting that Sony/AMD added more CPU cache to the Jaguar to overcome and smooth out GDDR5's inherent design?

Even this suggests that GDDR5 may have something to do with the difference in the numbers, i.e. the 14% gap. With a 150 MHz gain you're looking at maybe a 10% gain overall with the CPUs. You can see where GDDR5 really helps one GPU while the DDR3 chokes the other.
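
As a rough sanity check on that 10% figure, here is a quick sketch, assuming the commonly cited Jaguar clocks of 1.6 GHz for the PS4 and 1.75 GHz for the Xbox One (the 14% is the benchmark gap mentioned above):

```python
# How much of the ~14% benchmark gap the 150 MHz clock bump alone explains.
# Clocks are the commonly cited Jaguar figures; the 14% is the gap quoted above.
ps4_clock, xb1_clock = 1.60, 1.75   # GHz

clock_gain = xb1_clock / ps4_clock - 1
print(f"Clock advantage: {clock_gain:.1%}")                      # ~9.4%

observed_gap = 0.14
leftover = (1 + observed_gap) / (1 + clock_gain) - 1
print(f"Left over for memory/API differences: {leftover:.1%}")   # ~4%
```

So the clock bump accounts for most of the gap, with a few percent left over for whatever the memory or API differences contribute.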

#202  Edited By btk2k2
Member since 2003 • 440 Posts

@04dcarraher said:

@btk2k2 said:

That graph is showing latency as a function of clock cycles. Yes, in that scenario the cycle latency for GDDR5 is higher than DDR3's, but because GDDR5 runs through more clock cycles in the same amount of time, the latency as a function of time is similar for both, as the Hynix data sheets have shown.

Now I am looking at some latency benchmarks for quad- vs dual-channel memory on the X79 motherboard and there is no difference in latency, as you can see here (picture on left). This compares 2x 64-bit buses to 4x 64-bit buses and shows no difference in latency at all. I see no reason to assume that 8x 32-bit buses would incur a latency penalty when going from two to four 64-bit buses does not.

It is blatantly obvious that L1 cache has lower latency than L2 cache, which has lower latency than L3 cache, which has lower latency than system memory. Why do you think server CPUs have such large L3 caches compared to their desktop counterparts? It is to avoid trips to main memory. These CPUs are the same spec, so if one has to make a trip to main memory, so does the other, and the overall trip time is practically the same for both systems.

This 'PS4 has higher memory latency' myth has to stop; it is bogus, I have debunked it many times now, and it is just getting tiresome.

The Hynix data sheets only cover the modules, not the controllers, as far as I've seen... Also, are you even reading the charts you linked correctly? You're comparing quad channel vs dual channel, and quad is solely for bandwidth needs. If you look at the dual vs quad results with Sandra, quad channel does indeed add latency, and your AIDA64 example shows that clock rates change the results. The fact is that GDDR5 does add latency over DDR3; you can't beat around the bush. Is it a game changer? No, but every little bit helps. Do we have proof or specs suggesting that Sony/AMD added more CPU cache to the Jaguar to overcome and smooth out GDDR5's inherent design?

Even this suggests that GDDR5 has something to do with the difference in the numbers, i.e. the 14% gap. With a 150 MHz gain you're looking at maybe a 10% gain overall.

How do you think they test the memory modules? They cannot do it in a vacuum; the modules have to be connected to a memory controller of some description for there to be a test in the first place.

Each memory channel has a 64-bit bus. You stated that more channels add latency, yet these graphs do not show that when going from 2 channels to 4. The Xbox One uses 4 channels, so it is effectively quad channel, and the PS4 uses 8 channels, so it is effectively octo channel. There are no octo-channel CPUs or motherboards out in the wild, but given that going from dual to quad does not show an increase in latency, it is not unreasonable to think the same holds true when going from quad to octo.

SiSoft Sandra on the X79 platform shows a 0.2 ns increase in latency going from dual to quad, which is within the margin of error. X79 shows worse latency than Z68 even when both are using a dual-channel configuration, so a cross-platform comparison is not valid.

The Xbox One has 30 GB/s of memory bandwidth for the CPU vs the PS4's 20 GB/s; it is entirely possible that the extra bandwidth here is causing the greater-than-clockspeed scaling of the Xbox One CPU. It is also possible that in this specific scenario the Xbox One API has slightly less CPU overhead, also leading to greater-than-clockspeed scaling. Without knowing the details of the test there is no way to know the exact cause of the scaling the Xbox One is showing.

There is no evidence to support the idea that GDDR5 is higher latency than DDR3. There is however evidence to support the idea that the latency for both is about the same. You need to start providing actual proof of your claims rather than supposition.
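
To put the cycles-versus-nanoseconds point in concrete terms, here is a minimal sketch with plausible but made-up CAS figures (not the actual Hynix datasheet values):

```python
# Absolute CAS latency = CAS cycles / command clock.
# The figures below are illustrative, not taken from the Hynix datasheets.
def cas_latency_ns(cas_cycles, command_clock_mhz):
    return cas_cycles / command_clock_mhz * 1000.0  # nanoseconds

# DDR3-2133-class: ~14 cycles at a ~1066 MHz command clock
print(f"DDR3 example:  {cas_latency_ns(14, 1066):.1f} ns")   # ~13.1 ns
# GDDR5 at ~5.5 Gbps: more cycles, but a ~1375 MHz command clock
print(f"GDDR5 example: {cas_latency_ns(18, 1375):.1f} ns")   # ~13.1 ns
```

More cycles at a faster clock works out to roughly the same wall-clock latency, which is the whole point.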

#203 tormentos
Member since 2003 • 33784 Posts

@FastRobby said:

Same links as always, already said why it doesn't matter... Witcher dev doesn't use DX12. Nice try, go quickly to your bookmarks for other links, or maybe you have those pieces of text just in text files so you can copy/paste the stuff you care about, and ignore the other parts. You are the laughing stock of the forum.

So basically the same denial as always... hahaha

DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware.

This is from MS itself... hahaha, so what is your excuse?

Do you see how he says it allows CONSOLE-STYLE DIRECT ACCESS TO THE GPU?

Hahahaaaaaaaaaaa.

For the Xbox One, DX12 is been there, done that...

@btk2k2 said:

Normally that would be entirely accurate, but the Xbox One API is closer to the PC spec of DX11 than I thought. The Metro devs talk about it in their interview with Digital Foundry. There are closer-to-the-metal and custom API options you can use on Xbox One, but that is more dev intensive and requires a lot more work. That is possibly how MS was able to get both Destiny and Diablo 3 up to 1080p, by using those tools instead of the standard DX11 SDK. What DX12 will do is make those custom API tools a lot easier and quicker for devs to use, so they can code closer to the metal without the extra dev time that is currently required.

It will not mean a huge jump in performance because it shifts the baseline; what it will probably mean is that AI in games is improved, or the CPU does more physics, or you get more on-screen enemies, or whatever else the devs think will enhance their game with the extra CPU runtime.

Well, Diablo 3 runs at 1080p 60 FPS on a 7770, so the Xbox One should run it too, without having to go to the metal, which the 7770 isn't doing on PC.

I also made a thread about DX12 being the Xbox One API ported to PC.

With actual quotes from MS directly talking about how they would bring to PC the gains they have had on consoles for years.

#205  Edited By 04dcarraher
Member since 2004 • 23832 Posts

@btk2k2 said:

Each memory channel has a 64-bit bus. You stated that more channels add latency, yet these graphs do not show that when going from 2 channels to 4. The Xbox One uses 4 channels, so it is effectively quad channel, and the PS4 uses 8 channels, so it is effectively octo channel. There are no octo-channel CPUs or motherboards out in the wild, but given that going from dual to quad does not show an increase in latency, it is not unreasonable to think the same holds true when going from quad to octo.

SiSoft Sandra on the X79 platform shows a 0.2 ns increase in latency going from dual to quad, which is within the margin of error. X79 shows worse latency than Z68 even when both are using a dual-channel configuration, so a cross-platform comparison is not valid.

The Xbox One has 30 GB/s of memory bandwidth for the CPU vs the PS4's 20 GB/s; it is entirely possible that the extra bandwidth here is causing the greater-than-clockspeed scaling of the Xbox One CPU. It is also possible that in this specific scenario the Xbox One API has slightly less CPU overhead, also leading to greater-than-clockspeed scaling. Without knowing the details of the test there is no way to know the exact cause of the scaling the Xbox One is showing.

There is no evidence to support the idea that GDDR5 is higher latency than DDR3. There is however evidence to support the idea that the latency for both is about the same. You need to start providing actual proof of your claims rather than supposition.

lol, thinking that going from 64-bit channels to eight 32-bit channels isn't going to add latency... 30 vs 20 GB/s of bandwidth does not make a difference for the CPU. Even AMD's top-tier CPUs, which are multiple times faster, don't see any real difference between DDR3 at 21 GB/s vs 34 GB/s. No evidence? Please, it's already been explained... and yes, using GDDR5 does create more latency than DDR3. You need to start understanding how GDDR5 works; if you knew anything, you would know GDDR5 memory setups create more stalls in the data stream, aka latency.

It's a fact that GDDR5's latency is higher than DDR3's because of the multiple memory controllers. You're looking at the bare modules themselves; when you're using eight 32-bit (dual 16-bit input/output) channels, the purpose of having so many is to give constant bursts of speed, allowing a high amount of bandwidth. That is what creates the latency, since you have multiple controllers having to read multiple DRAM chips and then combine that data for the GPU. This is where you get the stalls or pauses, aka latency, and because of the parallel nature of GPUs they can move on to the next task instantly, out of order, and come back when the stall is over. Memory controllers for CPUs are wider and operate in a more linear fashion.
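
That trade-off can be put into rough numbers with Little's Law: the data that must be in flight to keep a bus busy equals bandwidth times latency. A quick sketch, assuming a round 100 ns DRAM round-trip and the PS4's headline 176 GB/s figure:

```python
# Little's Law for memory: bytes in flight = bandwidth (bytes/ns) * latency (ns).
# 1 GB/s is one byte per nanosecond, so the arithmetic below is unit-consistent.
LATENCY_NS = 100          # assumed round-trip to DRAM, illustrative

# Keeping a 176 GB/s GDDR5 bus saturated needs ~17.6 KB outstanding,
# i.e. hundreds of 64-byte requests in flight -- GPU thread parallelism supplies that.
print(f"Data in flight to saturate 176 GB/s: {176 * LATENCY_NS:,} bytes")

# A CPU thread chasing dependent loads has a single 64-byte line in flight,
# so it extracts only:
print(f"Bandwidth of one serial load chain: {64 / LATENCY_NS:.2f} GB/s")
```

Which is why a GPU shrugs off a stall that would leave a CPU core sitting idle.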

#206  Edited By btk2k2
Member since 2003 • 440 Posts
@04dcarraher said:

@btk2k2 said:

Each memory channel has a 64-bit bus. You stated that more channels add latency, yet these graphs do not show that when going from 2 channels to 4. The Xbox One uses 4 channels, so it is effectively quad channel, and the PS4 uses 8 channels, so it is effectively octo channel. There are no octo-channel CPUs or motherboards out in the wild, but given that going from dual to quad does not show an increase in latency, it is not unreasonable to think the same holds true when going from quad to octo.

SiSoft Sandra on the X79 platform shows a 0.2 ns increase in latency going from dual to quad, which is within the margin of error. X79 shows worse latency than Z68 even when both are using a dual-channel configuration, so a cross-platform comparison is not valid.

The Xbox One has 30 GB/s of memory bandwidth for the CPU vs the PS4's 20 GB/s; it is entirely possible that the extra bandwidth here is causing the greater-than-clockspeed scaling of the Xbox One CPU. It is also possible that in this specific scenario the Xbox One API has slightly less CPU overhead, also leading to greater-than-clockspeed scaling. Without knowing the details of the test there is no way to know the exact cause of the scaling the Xbox One is showing.

There is no evidence to support the idea that GDDR5 is higher latency than DDR3. There is however evidence to support the idea that the latency for both is about the same. You need to start providing actual proof of your claims rather than supposition.

lol, thinking that going from 64-bit channels to eight 32-bit channels isn't going to add latency... 30 vs 20 GB/s of bandwidth does not make a difference for the CPU. Even AMD's top-tier CPUs, which are multiple times faster, don't see any real difference between DDR3 at 21 GB/s vs 34 GB/s. No evidence? Please, it's already been explained... and yes, using GDDR5 does create more latency than DDR3. You need to start understanding how GDDR5 works; if you knew anything, you would know GDDR5 memory setups create more stalls in the data stream, aka latency.

It's a fact that GDDR5's latency is higher than DDR3's because of the multiple memory controllers. You're looking at the bare modules themselves; when you're using eight 32-bit (dual 16-bit input/output) channels, the purpose of having so many is to give constant bursts of speed, allowing a high amount of bandwidth. That is what creates the latency, since you have multiple controllers having to read multiple DRAM chips and then combine that data for the GPU. This is where you get the stalls or pauses, aka latency, and because of the parallel nature of GPUs they can move on to the next task instantly, out of order, and come back when the stall is over. Memory controllers for CPUs are wider and operate in a more linear fashion.

Based on the evidence that I can find, adding more memory channels has no impact on latency, ergo 2, 4, or 8 channels makes no difference. There are certain scenarios that are bandwidth limited, even on CPUs; it is possible that this cloth physics system is one such scenario, but without more information on the test that is speculation. You keep stating that it's a fact that GDDR5 is higher latency than DDR3. If it is such a fact, show me the numbers, the charts, the manufacturer datasheets, the benchmarks that prove this to be the case. Stating it is a fact does not make it true, no matter how many times you say it. Until you show me actual figures to support your statement I am not going to discuss this further. You need to stop talking and start actually showing the data, because so far I am the only one to provide data for my claims.
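
For reference, the kind of latency benchmark in question is essentially a pointer chase: every load depends on the previous one, so extra channels or bandwidth cannot hide anything and the timer sees raw round-trip latency. A rough sketch of the idea (illustrative only; interpreter overhead swamps the real numbers, and this is not how Sandra or AIDA64 are actually implemented):

```python
import random, time

def pointer_chase_ns(n_elems=1 << 21, hops=2_000_000):
    """Average time per dependent access over a randomly ordered index ring."""
    # Build one big random cycle so the prefetcher cannot guess the next address;
    # each access's index comes from the value just read (a serial dependency chain).
    order = list(range(n_elems))
    random.shuffle(order)
    chain = [0] * n_elems
    for a, b in zip(order, order[1:] + order[:1]):
        chain[a] = b

    idx = 0
    start = time.perf_counter()
    for _ in range(hops):
        idx = chain[idx]              # nothing to overlap: latency-bound by design
    elapsed = time.perf_counter() - start
    return elapsed / hops * 1e9, idx  # ns per hop; idx returned so the loop has a use

ns_per_hop, _ = pointer_chase_ns()
print(f"~{ns_per_hop:.0f} ns per dependent access (interpreter overhead included)")
```

Because every access depends on the previous one, adding channels or bandwidth cannot speed this loop up; only the round-trip latency (plus, here, Python's own overhead) shows through.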

#207  Edited By 04dcarraher
Member since 2004 • 23832 Posts

@btk2k2 said:
@04dcarraher said:

@btk2k2 said:

Based on the evidence that I can find, adding more memory channels has no impact on latency, ergo 2, 4, or 8 channels makes no difference. There are certain scenarios that are bandwidth limited, even on CPUs; it is possible that this cloth physics system is one such scenario, but without more information on the test that is speculation. You keep stating that it's a fact that GDDR5 is higher latency than DDR3. If it is such a fact, show me the numbers, the charts, the manufacturer datasheets, the benchmarks that prove this to be the case. Stating it is a fact does not make it true, no matter how many times you say it. Until you show me actual figures to support your statement I am not going to discuss this further. You need to stop talking and start actually showing the data, because so far I am the only one to provide data for my claims.

With your "evidence" , you haven't shown anything showing that GDDR5 actually does not add latency ..... And yes going from 2 to 4 channel does add latency even with 64 bit per channel with DDR3 as shown with the site you provided. Its a fact get over it, GDDR5 with its multiple memory controllers add latency. Its common knowledge that latency of GDDR5 is higher then DDR3, Most people dont know its because of the amount of controllers not the modules themselves. But the its not a deal breaker, but it can hurt certain tasks performance compared to DDR3.

"Why don't PCs have DDR5 as system RAM? There are two reasons. For one, DDR3 actually has lower latency than DDR5, making it better for a CPU's quick-access general tasks. And while its bandwidth is a fraction of DDR5's, it is still more than adequate to provide data to the CPU. If we replaced it with GDDR5 we might actually see reduced CPU performance due to DDR5's increased latency."

#208 tormentos
Member since 2003 • 33784 Posts

@FastRobby said:

What denial? I'm directly debunking your statement... So now you give another link because I already owned you on the other one... Yes, I see console-style direct access... Where am I saying that this isn't the case? I'm not claiming that DX12 will increase performance by 100%, I'm not saying it will have a MAJOR impact. I'm saying it WILL have an impact; I don't know how much, because I'm not a developer with early access. Are you? No, you are not. Microsoft also already said that DX12 will improve multi-threading on the Xbox One CPU. That is called an improvement, is it not? Your little text is saying NOTHING; it's about PC, and it's just saying that you can get closer to the metal, like with Xbox One. Nowhere is it saying that it's exactly like the Xbox One; it's just saying it's the same style. You'll see that DX12 will improve the Xbox One, and you can go cry in your parents' basement.

You owned shit; I used a developer telling you to your face that DX12 will do nothing.

It may be able to push a few more triangles, but since the weak-sauce GPU in the Xbox can't shade them, it means shit... hahaha

It will bring nothing, because you refuse like a moron to understand that DX12 has been on the Xbox One since day one; DX12 was on the Xbox 360... hahaha. But, but, but, DX12 doesn't release until 2015, how is that possible? DX12 = CONSOLE API ON PC.

As stated by MS, it will bring gains on PC because PC didn't have those gains, and only in games where even an Intel i7 can be CPU bound, because otherwise you will see no difference; it would help lesser CPUs.

Both the Xbox One and PlayStation 4 have APIs for accessing the GCN architecture directly, so Mantle in itself isn’t needed for that. The main advantage Mantle gives us is the ability to have console-like performance, particularly in batch performance, on the PC. At AMD’s recent developer summit, Oxide demonstrated a PC running at over 100,000 batches per frame. Before now, this type of performance on a PC was unheard of.

http://gamingbolt.com/ps4-xbox-one-dont-need-mantle-to-boost-visuals-api-access-to-gcn-architecture-already-available#3BYFRU3Kw80v9L97.99
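
The batch numbers are really just per-draw-call CPU cost. A rough sketch of that arithmetic, with made-up per-call overheads purely for illustration:

```python
# Batches that fit into a frame when each submission costs some CPU time.
# Per-call overheads are made-up illustrative values, not measured figures.
FRAME_MS = 33.3                      # 30 fps frame budget

def max_batches(per_call_us, cpu_share=0.5):
    """Batches per frame if a given share of the frame goes to submission."""
    return int(FRAME_MS * 1000 * cpu_share / per_call_us)

print("High-overhead path:", max_batches(40))    # a few hundred batches/frame
print("Low-overhead path: ", max_batches(0.15))  # ~100,000+ batches/frame
```

Push the per-call cost low enough and you get into the territory of the 100,000 batches per frame Oxide was demonstrating.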

Mantle = DX12 = console API.

That last part is the focus of DirectX 12. The new version of DirectX, and the key Direct3D drivers underlying it, wants to give developers the ability to "fully exploit the CPU," Gosalia said. DirectX 12 will be the first version of the API to "go much lower level" allowing console-style direct access to GPU system calls through APIs mapped specifically to a wide range of hardware. "With DirectX 12, we want the fastest way to exploit the GPU," he said.

http://arstechnica.com/gaming/2014/03/microsoft-touts-performance-improvements-for-existing-hardware-in-directx-12/

That was in March, when MS showed DX12 for the first time; it was on PC, not Xbox One. And yeah, I am right; the only ownage you do is of yourself...

While you keep making shit up, the PS4 keeps releasing games in superior form to the Xbox One; The Evil Within was basically 720p on Xbox... lol

#209 SonySoldier-_-
Member since 2012 • 1186 Posts

Everyone has been trying to tell lemmings this for so long now. But they were too dense to realize it.

But now we hear it from the head honcho himself.

This is a very hard pill for the lemmings to swallow.

TLHBO.

#210 btk2k2
Member since 2003 • 440 Posts

@04dcarraher said:

@btk2k2 said:
@04dcarraher said:

@btk2k2 said:

Based on the evidence that I can find, adding more memory channels has no impact on latency, ergo 2, 4, or 8 channels makes no difference. There are certain scenarios that are bandwidth limited, even on CPUs; it is possible that this cloth physics system is one such scenario, but without more information on the test that is speculation. You keep stating that it's a fact that GDDR5 is higher latency than DDR3. If it is such a fact, show me the numbers, the charts, the manufacturer datasheets, the benchmarks that prove this to be the case. Stating it is a fact does not make it true, no matter how many times you say it. Until you show me actual figures to support your statement I am not going to discuss this further. You need to stop talking and start actually showing the data, because so far I am the only one to provide data for my claims.

With your "evidence" , you haven't shown anything showing that GDDR5 actually does not add latency ..... And yes going from 2 to 4 channel does add latency even with 64 bit per channel with DDR3 as shown with the site you provided. Its a fact get over it, GDDR5 with its multiple memory controllers add latency. Its common knowledge that latency of GDDR5 is higher then DDR3, Most people dont know its because of the amount of controllers not the modules themselves. But the its not a deal breaker, but it can hurt certain tasks performance compared to DDR3.

"Why don't PCs have DDR5 as system RAM? There are two reasons. For one, DDR3 actually has lower latency than DDR5, making it better for a CPU's quick-access general tasks. And while its bandwidth is a fraction of DDR5's, it is still more than adequate to provide data to the CPU. If we replaced it with GDDR5 we might actually see reduced CPU performance due to DDR5's increased latency."

My evidence shows that going from 2 to 4 channels does not add latency, so you can reasonably infer that going to 8 channels would not increase latency either. Nowhere do the graphs I have provided show that going from 2 to 4 channels increases latency. Going from Z68 to X79 does, but 2 to 4 on the same platform does not. You keep saying it is the number of controllers; show evidence for this, actual evidence to back up what you consider 'common knowledge'.

You do realise that AMD looked into using GDDR5 with their Kaveri APU, but the problems were threefold: 1) coming up with a standard GDDR5 module was not that easy and the memory manufacturers were not too keen; 2) it would have been expensive, and while it would have increased GPU performance, the additional cost was not worth the speed increase; and 3) getting motherboard manufacturers to want to make boards that support it. It had nothing to do with latency.

#212 StormyJoe
Member since 2011 • 7806 Posts

@tormentos: Umm... same date as your article: Linkey.

"Microsoft: DX12 will improve Xbox One performance"

#214 tormentos
Member since 2003 • 33784 Posts

@StormyJoe said:

@tormentos: Umm... same date as your article: Linkey.

"Microsoft: DX12 will improve Xbox One performance"

With this new update (that is due out in preview form later this year), developers will have more direct control over how hardware renders in-game visuals. This could lead to a boost in resolution, frame rate, and all around graphical fidelity.

http://venturebeat.com/2014/03/20/xbox-one-will-see-improved-performance-with-directx-12-according-to-microsoft/

That is based on this ^^. It was when MS tried to piggyback the gains on PC onto the Xbox One, which was quickly shut down.

Including by The Witcher 3's developers, who shut it down.

@FastRobby said:

Neglecting stuff I said, just to keep on crying about: "DX12 doesn't do anything for Xbox One, look at this link, and this link". From your own link, Witcher 3 dev: "Developers will be able to push more triangles". Do you know what that is called? It's called an improvement, owned by your own link. Nice one Tormentos, idiot.

And also, like I have been saying all the time:

https://twitter.com/MalwareMinigun/status/453420457165737984

DX12 will improve the CPU, better multi-threading, aka improvement. Bu-bu-bu-bu, cry cry

It's not my fault that you are a moron who can't read: pushing more triangles but not being able to shade them defeats the whole purpose, since you can't use them, which is what the Witcher developer stated. The Xbox One's weak-sauce GPU can't shade the extra triangles, so it is useless.

Kind of like the PS2 being able to move 75 million polygons but only able to use 66 million of them.

That Twitter link doesn't prove anything; it is pulled from the same article where several developers talk about DX12's so-called gains, where some even make fun of the claim and others ask why MS would claim something like that on record.

DX12 will do shit; what it brings to PC has been on consoles for years, and if you were not a totally misinformed moron on this topic and on what the hell DX12 really is, you would know it...

#215  Edited By StormyJoe
Member since 2011 • 7806 Posts

@tormentos: Sure, sure. MS is always the liar, eh?

#216 blaznwiipspman1
Member since 2007 • 16591 Posts

Who really cares... graphics are good enough nowadays.

#217 FoxbatAlpha
Member since 2009 • 10669 Posts

I see people are still owning Tormatoes. His lobotomy has left his comprehension severely impaired.

DX12 does nothing??? NOTHING? You wish.

#218  Edited By tormentos
Member since 2003 • 33784 Posts

@StormyJoe said:

@tormentos: Sure, sure. MS is always the liar, eh?

Lower CPU overhead will not make your games run at higher resolutions; resolution is up to the GPU, not the CPU. What it can affect is your frames per second and the number of things on screen at once.

There’s been a lot of talk about the Xbox One’s apparent struggle to hit 1080p resolutions in comparison to the PS4.

There’s also been a lot of talk about the extent to which the Xbox One’s eventual DirectX 12 upgrade can improve the console’s performance.

However, The Witcher 3’s lead engine programmer Balazs Torok says that those who think that the Xbox One’s DX 12 upgrade will make it easier to hit 1080p resolutions are misinformed.

“I think there is a lot of confusion around what and why DX12 will improve,” Torok told GamingBolt.

“Most games out there can’t go 1080p because the additional load on the shading units would be too much. For all these games DX12 is not going to change anything,” Torok explained.

“They might be able to push more triangles to the GPU but they are not going to be able to shade them, which defeats the purpose,” he continued.

http://www.nowgamer.com/dx-12-wont-fix-xbox-ones-1080p-issues-says-the-witcher-3-dev/
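
The shading-bound point is just pixel arithmetic: resolution scales the per-frame pixel-shading work directly, and extra triangle throughput does nothing about that.

```python
# Pixels shaded per frame at common resolutions (straight arithmetic).
resolutions = {"720p": (1280, 720), "900p": (1600, 900), "1080p": (1920, 1080)}
pixels = {name: w * h for name, (w, h) in resolutions.items()}

for name, count in pixels.items():
    ratio = count / pixels["720p"]
    print(f"{name}: {count:,} pixels ({ratio:.2f}x the 720p shading load)")
# 1080p is 2.25x the pixel work of 720p and 1.44x that of 900p --
# and that load falls on the shader units, not on draw-call submission.
```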

Just like they lied about the cloud, tiled resources, the diminishing returns of a stronger GPU, and several other things.

@FoxbatAlpha said:

I see people are still owning Tormatoes. His lobotomy has left his comprehension severely impaired.

DX12 does nothing??? NOTHING? You wish.

lolsy...

#219 deactivated-59d151f079814
Member since 2003 • 47239 Posts

@blaznwiipspman1 said:

Who really cares... graphics are good enough nowadays.

Pretty much this. Last gen we saw a sizable leap: consoles integrated multiplayer and released games with a scope that consoles like the Xbox, GameCube, and PS2 just couldn't manage. Games like Dead Rising, for instance, shipped with much larger, more detailed environments that were interactable and changing. We aren't seeing that this gen. In fact, all we are seeing are games being released that could have been developed on last-gen consoles with no real downgrade or change in gameplay. I have yet to see the hardware push forward any kind of gameplay advance. If anything, I've seen the exact opposite: while AAA games stagnate and recycle their gameplay, the indie market is booming, reviving once-dead genres and pushing new gameplay ideas.

Graphics have reached the point of diminishing returns when they provide us the exact same gameplay experience we have been seeing for years now in many genres. Basically, the last game that impressed me graphically was Crysis 1.

#221 tormentos
Member since 2003 • 33784 Posts

@FastRobby said:

Back to ignoring you, it will only be for a year, after that you probably won't dare showing your face here again. DX12 won't do anything for Xbox One, LOL. Why would they even bring it to the Xbox One then, if it doesn't even improve ANYTHING? For lols?

Oh please, not the silent treatment, not that! Please don't ignore me... hahaha

You can't call it 'bringing' when it has always been there... lol

The real question is: will you still be posting when DX12 fails to live up to the hype and games continue to come out in inferior form on the XBO... lol