More evidence shows Nvidia GPUs losing performance against AMD over time, and how Gameworks is damaging performance

This topic is locked from further discussion.


#51 waahahah
Member since 2014 • 2462 Posts

@ronvalencia said:

Access to the source code enables NVIDIA to have a quick turnaround, as in Tomb Raider 2013's case.

Ashes of Singularity's equal source code access for Intel, AMD and NVIDIA gives transparency.

The MIT License is similar to the Simplified BSD license used by FreeBSD. Sony's PS4 OS uses FreeBSD.

3rd-party MIT-licensed open source software is used in Intel's server products.

NVIDIA already has an optimised TressFX code path. It's pretty easy to run multiple code paths by checking the GPU vendor ID.

NVIDIA Hairworks runs fine on AMD hardware if users/news media remembered the driver settings from the Crysis 2 NVIDIA edition (excess tessellation on flat surfaces is just an anti-consumer tactic from NVIDIA).
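(For illustration of the vendor-ID point quoted above: a minimal sketch, assuming a D3D11/DXGI setup, of branching on the primary adapter's PCI vendor ID. The well-known IDs are 0x10DE for NVIDIA, 0x1002 for AMD and 0x8086 for Intel; the helper and enum names are hypothetical, not from any actual SDK.)

```cpp
// Sketch only: pick a hair/effects code path by PCI vendor ID of the primary adapter.
// Link with dxgi.lib.
#include <dxgi.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

enum class HairBackend { TressFX, Hairworks, Generic };   // hypothetical names

HairBackend PickHairBackend()
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return HairBackend::Generic;

    ComPtr<IDXGIAdapter1> adapter;
    if (FAILED(factory->EnumAdapters1(0, &adapter)))       // primary adapter
        return HairBackend::Generic;

    DXGI_ADAPTER_DESC1 desc{};
    adapter->GetDesc1(&desc);

    switch (desc.VendorId) {
        case 0x10DE: return HairBackend::Hairworks;        // NVIDIA
        case 0x1002: return HairBackend::TressFX;          // AMD
        case 0x8086: return HairBackend::TressFX;          // Intel (use the open path)
        default:     return HairBackend::Generic;
    }
}
```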

Well, they only said they got final game code, not source. Either way, AMD still caused NVIDIA to scramble post-release because they couldn't profile their driver and the game together. And I doubt that having source code would have sped up the response. It might have helped if they needed to help the Tomb Raider developers fix their end.

Ashes of Singularity's... developer opening up and focusing on customer experience...

Having multiple TressFX libraries could go badly down the road though... especially for developers. Now there is potential for 3 libraries: TressFX Nvidia, TressFX AMD, maybe TressFX Intel. What good is an SDK if it ends up splitting like this? It's probably not bad now, but if they diverge more... and again, Nvidia has no authority to design new features into it, which is why their Hairworks will stay around. This is great for software support and regression testing... This is a great example of what's wrong with open source: conflict resolution usually ends with multiple libraries...

Nvidia didn't develop Crysis 2; that's completely the developer's fault. Tessellation performs worse on AMD hardware, so tessellating flat surfaces is probably going to be worse on AMD hardware.

If there is any foul here, it's the developers not making sure the game runs well on both platforms before releasing, or using a technology that is known to be worse on the other platform as part of a core system, like Project CARS and PhysX.


#52 waahahah
Member since 2014 • 2462 Posts
@jereb31 said:

No, the development of their Hairworks tessellation tech does not make them bad; it's how they appear to be wielding it that makes them bad.

So Nvidia develops no solutions (like Mantle) and is the good guy.

AMD develops a solution that it was going to remain in control of, but still allows everyone to use it for FREE and implements updates based on developer input, and is the bad guy.

Yes, TressFX doesn't use tessellation, bad reference on my part, but it is the competing tech to Hairworks, works as well as and in some cases better than Hairworks, and is open. Hairworks only works better on Nvidia and is not open for AMD to optimise.

I have an idea: Nvidia (AMD) developed Hairworks (Mantle) and is in full control of Hairworks (Mantle). Nvidia (AMD) has no foreseeable intention to ever release Hairworks (Mantle) as open source. Can you not see the double standard? Feel free to replace Hairworks with G-Sync, Gameworks or PhysX if you like.

Developers have to pay an undisclosed extra amount for access to Gameworks source code, IF the source code has been flagged for release. I have been noticing that articles refer to Nvidia keeping the source code under wraps for some time but allowing its use within Gameworks. I'll see if I can find the link again.

I'm not saying free is bad, but their control over the API to gain an advantage over Nvidia is basically the same as Nvidia developing proprietary SDKs designed around Nvidia's hardware strengths to gain an advantage. They aren't the bad guy for trying this, but they aren't as pro-everyone as they try to appear to be. They obviously didn't work with Nvidia at the beginning; Mantle was NEVER a public API or a collaboration with other hardware vendors.

I don't have double standards, I'm just not praising Nvidia or AMD for their decisions. They both are focused on pushing their hardware. Nvidia has a much better reputation currently and much better market share, so their focus on developing these extra technologies as a product they can sell is completely fine. Having that product helps developers, and selling it will probably allow them to allocate far more resources to continue development on it. Source licensing is paid for; they have varying degrees of licensing models. There's nothing wrong with paying for a product; that's how a lot of developers at Nvidia can continue to work there.

AMD's free/open policy is completely fine. They don't have as much market share, and because they don't sell the technologies they can potentially try to leverage the open source community to help develop the technology. If they don't make money off it, though, it might not be as well supported in the future.


#53 Jereb31
Member since 2015 • 2025 Posts

@waahahah said:
@jereb31 said:

No, the development of their Hairworks tessellation tech does not make them bad; it's how they appear to be wielding it that makes them bad.

So Nvidia develops no solutions (like Mantle) and is the good guy.

AMD develops a solution that it was going to remain in control of, but still allows everyone to use it for FREE and implements updates based on developer input, and is the bad guy.

Yes, TressFX doesn't use tessellation, bad reference on my part, but it is the competing tech to Hairworks, works as well as and in some cases better than Hairworks, and is open. Hairworks only works better on Nvidia and is not open for AMD to optimise.

I have an idea: Nvidia (AMD) developed Hairworks (Mantle) and is in full control of Hairworks (Mantle). Nvidia (AMD) has no foreseeable intention to ever release Hairworks (Mantle) as open source. Can you not see the double standard? Feel free to replace Hairworks with G-Sync, Gameworks or PhysX if you like.

Developers have to pay an undisclosed extra amount for access to Gameworks source code, IF the source code has been flagged for release. I have been noticing that articles refer to Nvidia keeping the source code under wraps for some time but allowing its use within Gameworks. I'll see if I can find the link again.

I'm not saying free is bad, but their control over the API to gain an advantage over Nvidia is basically the same as Nvidia developing proprietary SDKs designed around Nvidia's hardware strengths to gain an advantage. They aren't the bad guy for trying this, but they aren't as pro-everyone as they try to appear to be. They obviously didn't work with Nvidia at the beginning; Mantle was NEVER a public API or a collaboration with other hardware vendors.

I don't have double standards, I'm just not praising Nvidia or AMD for their decisions. They both are focused on pushing their hardware. Nvidia has a much better reputation currently and much better market share, so their focus on developing these extra technologies as a product they can sell is completely fine. Having that product helps developers, and selling it will probably allow them to allocate far more resources to continue development on it. Source licensing is paid for; they have varying degrees of licensing models. There's nothing wrong with paying for a product; that's how a lot of developers at Nvidia can continue to work there.

AMD's free/open policy is completely fine. They don't have as much market share, and because they don't sell the technologies they can potentially try to leverage the open source community to help develop the technology. If they don't make money off it, though, it might not be as well supported in the future.

Mantle was a collaboration between AMD and DICE developers initially. While proprietary, it was free to use. The fact that anyone could use it, even the competition, at no cost paints them in a better light than Nvidia.

You do seem to be showing double standards; if that's not your intention, OK, but you are chastising one company whilst sparing the other.

Personally I think Nvidia has a reputation for producing good graphics cards and being anti-competitive. They definitely have the market share, no question.


#54  Edited By ronvalencia
Member since 2008 • 29612 Posts

@waahahah:

Continuing from http://www.gamespot.com/forums/system-wars-314159282/hermits-did-you-guys-know-this-all-along-nvidia-ja-32902654/

@waahahah:

Again, Vulkan is a continuation of Mantle but is no longer Mantle; that doesn't address the issue we are actually discussing. Let me remind you, AMD wanted to keep the API before making it open source.

Let me remind you AMD/EA-DICE wanted Mantle to be an open standard. AMD hasn't open-sourced the Mantle binary blob. That's the difference.

Mantle as an open standard will not exist since it's tainted with MS HLSL.

Mantle for non-GCN hardware has to be modified, hence Vulkan was modified for better cross-vendor support, e.g. AMD is not technically aware of Kepler's hardware issues.

Switching between APIs doesn't remove hardware limitations.

@waahahah:

Again, you're missing the point. NVIDIA optimizing for it is because they have no choice. They will not promote it or use it in any of their games. And again, so long as AMD holds the licences, Intel will likely get their own solution. Open source is still subject to change; it's not a permanent thing.

It's you who missed the point, i.e. your argument about NVIDIA's promotion of TressFX has nothing to do with openness and transparency.

Your "Intel will likely get their own solution" assertion is your unsupported speculation and it's flawed.

The MIT License is for the current software version, i.e. companies like Sony can roll their own kit-bashed version; the requirement is to include the copyright notices with the shipped products.

@waahahah:

Under new branding. It means you will no longer be able to use new iterations of Mantle, and you'd have to get developer support. Also, the MIT license doesn't grant users the ability to change the specification. Any changes will be extensions, and it will probably be difficult to get developer support for them. Mantle was NOT a collaboration project for the people who would need to use it. The licensing model doesn't really help them at all at this point; Mantle needs to be a standard for developers, and if one day you can no longer use the standard, or make modifications to it, it is no longer a standard and will be less likely to be supported by developers.

The MIT license really doesn't help an open standard: I can't make modifications to it because then it would no longer be the standard, and I'd have to fight AMD to get changes accepted into the standard.

Such arguments have nothing to do with openness and transparency.

Does the NVIDIA Gameworks source code license enable the developer to sub-license it or show it to 3rd-party entities? The answer is NO.

Does MIT-licensed source code enable the developer to sub-license it or show it to 3rd-party entities? The answer is YES.

For source code access, is the NVIDIA Gameworks source code cost non-discriminatory? The answer is NO. NVIDIA has stated "case by case" for the source code access cost.

For source code access, is MIT-licensed source code non-discriminatory? The answer is YES.

@waahahah:

No, you do not understand the argument or the issue we are talking about. Extensions are not Mantle; Mantle was intended to be proprietary from the beginning. Only making extensions for OpenGL reaffirms my argument that Mantle was proprietary, was intended to be proprietary, but was for public use.

From the very beginning, both the NVIDIA and AMD OpenGL kit-bashes were rejected.

Remember, OpenGL has GLSL in place of MS HLSL. Mantle uses MS HLSL.

@waahahah:

You're missing the points entirely. In early 2014, AMD's plan was to provide the API and control the licensing model for it. It was going to be public, which is a good thing, but it was intended to be theirs. But they were struggling to get developer participation. I'll say this again... 12 games.

NVidia's kit-bash served one purpose in my argument: from the perspective of developers, there were more standard alternatives.

Such arguments have nothing to do with openness and transparency.

The OpenGL kit-bash failed, and with worse results than Mantle's.

The OpenGL kit-bash didn't stop calls for new APIs from the Khronos Group, i.e. the OpenGL kit-bash alternative wasn't good enough.

@waahahah:

DX12 and Mantle are both probably a result of AMD/MS collaboration on Xbox One. But who cares where it came from? Again, from a developer's perspective, there were more standard alternatives now.

The OpenGL kit-bash failed, and with worse results than Mantle's.

The OpenGL kit-bash didn't stop calls for new APIs from the Khronos Group, i.e. the OpenGL kit-bash alternative wasn't good enough.

@waahahah:

You are still missing the point. Mantle was only given to Khronos after it failed to get enough developer support to be viable. AMD holding on to a dead technology was pointless, so the only thing they could do was give it up to a 3rd party that Intel/NVidia and developers would support.

The OpenGL kit-bash failed, and with worse results than Mantle's.

The OpenGL kit-bash didn't stop calls for new APIs from the Khronos Group, i.e. the OpenGL kit-bash alternative wasn't good enough. What the developers want is the PS4's API, and Mantle is the closest solution.


#56 mirgamer
Member since 2003 • 2489 Posts

@n64dd said:

AMD has shit drivers. I stay clear of them at all costs.

I use both, leaning more towards Nvidia, but it's not entirely straightforward. It can be argued that the reason for that is that AMD do not have the Gameworks code, so they literally have to work blind to come out with patches for games optimised for said Gameworks. Personally, I have encountered problems with both. The last AMD card I purchased, the 7950, still runs very, very well for its age.

I wish people would stop saying stupid shit like you do; you don't seem to grasp that it is in their interest that Nvidia and AMD stay in healthy competition with each other.


#57 nutcrackr
Member since 2004 • 13032 Posts

That performance jump is huge, but why did the Nvidia performance tank so badly?


#58 UnrealGunner
Member since 2015 • 1073 Posts

2016 will be AMD's year; that is my prediction.


#59  Edited By KungfuKitten
Member since 2006 • 27389 Posts

This puts AMD in a very good position. All they will need to do is be more honest and open about things and dig up any dirt they have on NVIDIA.

If AMD can improve their driver situation, then a lot of people will switch sides, and if NVIDIA is exposed sooner for sabotaging newer popular titles, then consumers will punish them and the developers who are scamming us by collaborating with these practices. I hope we will see more employees (anonymously) expose their bosses for asking them to do illicit things like keeping in overly complex meshes to harm performance in favor of a hardware company.

In light of TTIP and further corruption among the higher-ups, it is essential that we give whistle-blowing more room and respect, and impose harsher, not necessarily monetary, repercussions on the higher-ups.

If NVIDIA is employing Apple-style planned obsolescence, does this mean the new Pascal cards will only be good for, say, 2 years, because then they will perform worse than newer mediocre cards? Might Polaris last us a lot longer? That could be something to take into consideration now.


#60 waahahah
Member since 2014 • 2462 Posts

@ronvalencia

A lot of what you posted either doesn't matter or is incorrect. AMD never intended an OPEN API structure but a "public" API. Those are their words. Even with the MIT license behind it, NVidia can't just change Mantle to suit their needs without fundamentally undermining Mantle as a standard.

The facts are, AMD did not allow participation from Nvidia/Intel in making Mantle. This completely negates any PR bullshit from AMD stating they are pro open standards...

MIT licenses aren't bad, but for creating standards... they don't matter. Intel/NVidia might be free to use TressFX, but they are still not free: they rely on AMD to accept the changes they want. AMD can easily develop TressFX behind closed doors without NVidia/Intel input. It means that AMD can force other vendors to be behind the curve. NVidia/Intel can make their own versions of TressFX to add features, but then we are again undermining a library's purpose of helping developers target one backend, not three. Because AMD is the controlling party, Intel will inevitably roll their own version eventually. Even if it's based on TressFX, it will likely diverge to work better with their hardware. If you haven't paid attention to the open source communities... this is how most projects tend to go unless the project is held by a truly unbiased party. Open source tends to have poor conflict resolution between parties.

Look at some of the things you posted: their libraries are supported better on their own hardware. How can we get mad at NVidia for using tessellation heavily when their video cards perform better with it?

MIT licenses will allow developers to try out the tech risk-free; in the end, AMD's tactics are designed to gain developer support while attempting to stay ahead of the competition by being the owner of those "open source" projects. AMD has proved that if it can retain ownership and authority over a project, it will.


#61 waahahah
Member since 2014 • 2462 Posts

@jereb31 said:

Mantle was a collaboration between AMD and DICE developers initially. While proprietary, it was free to use. The fact that anyone could use it, even the competition, at no cost paints them in a better light than Nvidia.

You do seem to be showing double standards; if that's not your intention, OK, but you are chastising one company whilst sparing the other.

Personally I think Nvidia has a reputation for producing good graphics cards and being anti-competitive. They definitely have the market share, no question.

Mantle was not free to use early on; only certain developers had access to it. Free isn't always the most important thing, especially when companies are probably going to pay for it in support fees.

I'm not sparing Nvidia, I'm just being realistic about the situation. The hardware is different, and the software stacks both have developed are different. The only way both sides' customers aren't going to be eating shit is when developers use the most common and well-supported features on both platforms, and don't use anything special.

It's really up to the developers to make sure all their customers are taken care of. With Tomb Raider, Nvidia didn't get to test their stuff out, so who really suffered? Customers who wanted to play the game. Who's at fault? Square.

AMD and Nvidia are primarily hardware vendors, and the software they write is for their hardware. Mantle showed AMD is just as likely to try to hold some sort of software over their competitors. By making it open they just did it with a friendlier face, but again, they didn't make it a collaborative effort. They didn't involve other hardware vendors. Had they kept ownership, they would have had the final authority and the ability to hide upcoming features from other vendors, which is literally how they started the project off...


#62  Edited By Ten_Pints
Member since 2014 • 4072 Posts

Well, all I get from this is that Nvidia is playing on AMD hardware weaknesses by pushing insane levels of tessellation, but by doing so Nvidia is also gimping its own older cards.

But Nvidia cards have their own weaknesses, and with DX12 and OpenGL (Vulkan) I hope we will start to see this. See how they like it when the shoe is on the other foot.


#63  Edited By EducatingU_PCMR
Member since 2013 • 1581 Posts

You have to be a moron to get anything NVIDIA other than the 980Ti at this point. And so close to Polaris release, that decision is still questionable.


#64 hobo120
Member since 2007 • 29 Posts

Using Gameworks as the excuse for a hit in performance is such a desperate attempt to make AMD look better. AMD cards don't handle tessellation as well as Nvidia's newer cards. Nvidia's older cards not performing as well is for the same reason as AMD's. AMD and Nvidia do the same things in the market, promoting biased effects, software, APIs, etc. AMD's Mantle was not free and only AMD used it. Nvidia using its Gameworks is no different. And just today Nvidia released new drivers for this game.


#65  Edited By kingtito
Member since 2003 • 11775 Posts

@EducatingU_PCMR said:

You have to be a moron to get anything NVIDIA other than the 980Ti at this point. And so close to Polaris release, that decision is still questionable.

Unless you have lots of disposable income. Then purchasing a 980ti now and then buying a new release once they come out isn't that questionable.

When do the new cards release?


#66  Edited By jun_aka_pekto
Member since 2010 • 25255 Posts

With AMD, the performance of my old HD 5770 got better and better in the four years I owned one even though there were newer AMD cards.

With my two-year-old 4 GB GTX 770, I can't help but feel Nvidia is marginalizing the 700 series in favor of the 900 series much earlier than expected, unlike with my past Nvidia cards.

Edit:

I'm tempted to switch my MSI 4 GB GTX 770 with the MSI R9 290x in my eldest daughter's PC. I mean, they look identical.


#67 Xtasy26
Member since 2008 • 5582 Posts

@n64dd said:
@jereb31 said:
@waahahah said:

I'm enjoying my Nvidia card, which I bought as a replacement after using a 270X for a few months... it had fine performance but the drivers were pretty shit.

I always end up looking on Reddit to see the current state of drivers and support for both AMD and Nvidia; it's not the only source of info, but they seem pretty vocal. For the past 6 months or so, it seems all you see from the Nvidia camp is "drivers keep crashing xyz" or "this update gives me a BSOD". From AMD you get quite a lot less of that, more praise for the performance increases from the updates and articles talking about upcoming tech.

If I had to choose a product that was trustworthy I would be going with AMD GPUs; at least you won't be supporting a douchebag of a company.

As somebody who is a developer, AMD drivers are absolute shit and have been for a long time. If you want to be in denial, that's all you.

I have bought the HD 4870, an HD 6950 BIOS-flashed to an HD 6970, and now the R9 Fury X. I have had zero problems with them so far. Not to mention I switched to an AMD/ATI GPU in my gaming laptop after my nVidia gaming laptop died due to defective die packaging, which was the worst piece of crap hardware I ever bought, thanks to nVidia.


#68 N64DD
Member since 2015 • 13167 Posts

@Xtasy26 said:
@n64dd said:
@jereb31 said:
@waahahah said:

I'm enjoying my Nvidia card, which I bought as a replacement after using a 270X for a few months... it had fine performance but the drivers were pretty shit.

I always end up looking on Reddit to see the current state of drivers and support for both AMD and Nvidia; it's not the only source of info, but they seem pretty vocal. For the past 6 months or so, it seems all you see from the Nvidia camp is "drivers keep crashing xyz" or "this update gives me a BSOD". From AMD you get quite a lot less of that, more praise for the performance increases from the updates and articles talking about upcoming tech.

If I had to choose a product that was trustworthy I would be going with AMD GPUs; at least you won't be supporting a douchebag of a company.

As somebody who is a developer, AMD drivers are absolute shit and have been for a long time. If you want to be in denial, that's all you.

I have bought the HD 4870, an HD 6950 BIOS-flashed to an HD 6970, and now the R9 Fury X. I have had zero problems with them so far. Not to mention I switched to an AMD/ATI GPU in my gaming laptop after my nVidia gaming laptop died due to defective die packaging, which was the worst piece of crap hardware I ever bought, thanks to nVidia.


#69  Edited By Xtasy26
Member since 2008 • 5582 Posts

@waahahah said:
@jereb31 said:
@waahahah said:
@jereb31 said:

I always end up looking on Reddit to see the current state of drivers and support for both AMD and Nvidia; it's not the only source of info, but they seem pretty vocal. For the past 6 months or so, it seems all you see from the Nvidia camp is "drivers keep crashing xyz" or "this update gives me a BSOD". From AMD you get quite a lot less of that, more praise for the performance increases from the updates and articles talking about upcoming tech.

If I had to choose a product that was trustworthy I would be going with AMD GPUs; at least you won't be supporting a douchebag of a company.

I don't really care about articles; what matters is execution. AMD doesn't have the option to control any of the technology. They've become a pretty bad company. They released Mantle, it never matured into anything, and they gave it to a 3rd party. In fact, Mantle is a good example of them being extremely douchey. They only ever intended to make the API public, but they'd retain ownership of the API and pretty much own all design choices. If you bought into the hype, all you got was pretty much a dead-end product that was never fully realized. Now a third party is making the API general enough for NVidia/Intel, and developer support will be much better.

I understand there are always problems with games. But NVidia releases fixes for games extremely fast or has profiles ready on release day for major games.

Control the technology?? What, like how Nvidia locks everyone out of the tech they develop?? Yeah, what a bunch of champions.

So what if Mantle didn't make it; was it not a step in the right direction? In the end it has gone to people who will be using that tech to move the industry forward. Better than the deathgrip Nvidia has had on PhysX for the last decade. Or hey, better yet, let's all pay extra for G-Sync because "reasons" when FreeSync is being touted as better and free. Who were those evil b@stards who were giving that away again?

Wait, if AMD are douchey for not relinquishing control of Mantle while still making it free to use, what does that make Nvidia, who don't relinquish control and make you pay licensing? I'm picturing someone with a black coat and an evil moustache.

NVidia doesn't, though. Companies pay for support and source code from NVidia. They are completely free to work with AMD on sorting out their game for AMD hardware; NVidia has even stated they are licensed to share the source code. Secondly, NVidia is not being evil.

[Embedded video: Richard Huddy on Gameworks]

Completely false. You are completely clueless. Richard Huddy explains pretty well how Gameworks works:

"Gameworks is supplied as DLLs. We all know what DLLs are: Dynamic Link Libraries. It's a piece of code which has already been compiled, which you can't change. That's how it's given to game developers. I have worked with game developers since 1996 (since 3DLabs). Ever since 1996 to date, I have never had a games developer ever ask me for a DLL for games code. They would never ask for that..."

"nVidia is doing something quite different here; they are supplying DLLs. The DLLs are black boxes. They are pieces of code written by nVidia which deliver certain visual effects. When nVidia signs contracts with game developers, they put in clauses that prevent the developers from sharing code for Gameworks. I have received e-mails from ISVs stating, 'Hey, we would love to work with you guys, but we can't work on the Gameworks stuff.'"

So, in other words, AMD can't optimize for Gameworks because they can't view the code for Gameworks. This is the complete opposite of TressFX, as explained by Richard Huddy: nVidia can view the TressFX code and has optimized it, thus you can now get similar performance out of Tomb Raider on nVidia and AMD hardware.

All AMD can do is make the best possible optimization for a Gameworks game without seeing the actual Gameworks DLL code, which makes their life more difficult and certainly takes more time. It's clear who the douchebag is.
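(To make the "black box" point concrete, here is a minimal sketch of how a game typically consumes middleware shipped only as a compiled DLL. The DLL name "HairMiddleware.dll" and the export "HFX_SimulateHair" are made up for illustration and are not real Gameworks names; the point is that this binary boundary is all a developer, or a competing driver team, ever gets to see.)

```cpp
// Sketch only: loading a precompiled middleware DLL and calling one export.
#include <windows.h>
#include <cstdio>

// The only visible contract: an exported function signature (hypothetical).
typedef int (__cdecl *SimulateHairFn)(float deltaSeconds);

int main()
{
    HMODULE lib = LoadLibraryA("HairMiddleware.dll");   // the black box
    if (!lib) {
        std::printf("middleware DLL not found\n");
        return 1;
    }

    auto simulate = reinterpret_cast<SimulateHairFn>(
        GetProcAddress(lib, "HFX_SimulateHair"));
    if (simulate) {
        simulate(1.0f / 60.0f);   // calls into code nobody outside the vendor can inspect
    }

    FreeLibrary(lib);
    return 0;
}
```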


#70 NyaDC
Member since 2014 • 8006 Posts

@Xtasy26 said:

Completely false. You are completely clueless. Richard Huddy explains pretty well how Gameworks works:

"Gameworks is supplied as DLLs. We all know what DLLs are: Dynamic Link Libraries. It's a piece of code which has already been compiled, which you can't change. That's how it's given to game developers. I have worked with game developers since 1996 (since 3DLabs). Ever since 1996 to date, I have never had a games developer ever ask me for a DLL for games code. They would never ask for that..."

"nVidia is doing something quite different here; they are supplying DLLs. The DLLs are black boxes. They are pieces of code written by nVidia which deliver certain visual effects. When nVidia signs contracts with game developers, they put in clauses that prevent the developers from sharing code for Gameworks. I have received e-mails from ISVs stating, 'Hey, we would love to work with you guys, but we can't work on the Gameworks stuff.'"

So, in other words, AMD can't optimize for Gameworks because they can't view the code for Gameworks. This is the complete opposite of TressFX, as explained by Richard Huddy: nVidia can view the TressFX code and has optimized it, thus you can now get similar performance out of Tomb Raider on nVidia and AMD hardware.

All AMD can do is make the best possible optimization for a Gameworks game without seeing the actual Gameworks DLL code, which makes their life more difficult and certainly takes more time. It's clear who the douchebag is.

Amen.


#71 Xtasy26
Member since 2008 • 5582 Posts

@hobo120 said:

Using Gameworks as the excuse for a hit in performance is such a desperate attempt to make AMD look better. AMD cards don't handle tessellation as well as Nvidia's newer cards. Nvidia's older cards not performing as well is for the same reason as AMD's. AMD and Nvidia do the same things in the market, promoting biased effects, software, APIs, etc. AMD's Mantle was not free and only AMD used it. Nvidia using its Gameworks is no different. And just today Nvidia released new drivers for this game.

The problem is they are over-tessellating with little to no improvement in visual quality, as explained in the initial video and by AMD's Richard Huddy.
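(To put rough numbers on why needless tessellation hurts: a back-of-the-envelope sketch of triangle counts per quad patch as the tessellation factor rises. The 2*N*N figure is an approximation, not an exact D3D11 tessellator output, and the cap value is just an example of what a driver-side tessellation override effectively does.)

```cpp
// Sketch only: approximate cost of raising the tessellation factor on a flat patch.
#include <cstdio>

// A quad patch tessellated uniformly at factor N yields on the order of 2*N*N triangles.
long long ApproxTrianglesPerQuadPatch(int tessFactor)
{
    return 2LL * tessFactor * tessFactor;
}

int main()
{
    const int factors[] = {8, 16, 64};
    for (int f : factors) {
        std::printf("factor %2d -> ~%lld triangles per patch\n",
                    f, ApproxTrianglesPerQuadPatch(f));
    }

    // A driver-side cap (like AMD's tessellation override) simply clamps the
    // requested factor, trading invisible extra detail on flat geometry for speed.
    const int cap = 16;
    const int requested = 64;
    const int applied = requested > cap ? cap : requested;
    std::printf("requested factor %d, capped to %d\n", requested, applied);
    return 0;
}
```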


#72 N64DD
Member since 2015 • 13167 Posts

@Xtasy26 said:
@hobo120 said:

Using Gameworks as the excuse for a hit in performance is such a desperate attempt to make AMD look better. AMD cards don't handle tessellation as well as Nvidia's newer cards. Nvidia's older cards not performing as well is for the same reason as AMD's. AMD and Nvidia do the same things in the market, promoting biased effects, software, APIs, etc. AMD's Mantle was not free and only AMD used it. Nvidia using its Gameworks is no different. And just today Nvidia released new drivers for this game.

The problem is they are over-tessellating with little to no improvement in visual quality, as explained in the initial video and by AMD's Richard Huddy.

All that matters is the end result.


#73 Xtasy26
Member since 2008 • 5582 Posts

@waahahah said:

@ronvalencia

A lot of what you posted either doesn't matter or is incorrect. AMD never intended an OPEN API structure but a "public" API. Those are their words. Even with the MIT license behind it, NVidia can't just change Mantle to suit their needs without fundamentally undermining Mantle as a standard.

The facts are, AMD did not allow participation from Nvidia/Intel in making Mantle. This completely negates any PR bullshit from AMD stating they are pro open standards...

MIT licenses aren't bad, but for creating standards... they don't matter. Intel/NVidia might be free to use TressFX, but they are still not free: they rely on AMD to accept the changes they want. AMD can easily develop TressFX behind closed doors without NVidia/Intel input. It means that AMD can force other vendors to be behind the curve. NVidia/Intel can make their own versions of TressFX to add features, but then we are again undermining a library's purpose of helping developers target one backend, not three. Because AMD is the controlling party, Intel will inevitably roll their own version eventually. Even if it's based on TressFX, it will likely diverge to work better with their hardware. If you haven't paid attention to the open source communities... this is how most projects tend to go unless the project is held by a truly unbiased party. Open source tends to have poor conflict resolution between parties.

Look at some of the things you posted: their libraries are supported better on their own hardware. How can we get mad at NVidia for using tessellation heavily when their video cards perform better with it?

MIT licenses will allow developers to try out the tech risk-free; in the end, AMD's tactics are designed to gain developer support while attempting to stay ahead of the competition by being the owner of those "open source" projects. AMD has proved that if it can retain ownership and authority over a project, it will.

Who cares what happened during the development of Mantle? The end result is that Mantle is now with the Khronos Group and will be implemented in Vulkan. nVidia will be able to use it. It's over and done with.

When will nVidia allow AMD to view Gameworks DLLs?


#74  Edited By Xtasy26
Member since 2008 • 5582 Posts

@n64dd said:
@Xtasy26 said:
@hobo120 said:

Using Gameworks as the excuse for a hit in performance is such a desperate attempt to make AMD look better. AMD cards don't handle tessellation as well as Nvidia's newer cards. Nvidia's older cards not performing as well is for the same reason as AMD's. AMD and Nvidia do the same things in the market, promoting biased effects, software, APIs, etc. AMD's Mantle was not free and only AMD used it. Nvidia using its Gameworks is no different. And just today Nvidia released new drivers for this game.

The problem is they are over-tessellating with little to no improvement in visual quality, as explained in the initial video and by AMD's Richard Huddy.

All that matters is the end result.

Which is putting more load on the GPU than it needs to produce the same effect, harming wide swaths of GPUs. That's brilliant!


#75 dxmcat
Member since 2007 • 3385 Posts

Uh last I checked, competitive means you play to win.

Not play and let the other guy score some goals so you keep playing into 300 overtimes.


#76  Edited By Mordeaniis
Member since 2014 • 161 Posts

Nice single game you use to make your point.
From an account with an AMD profile picture. Anyone who takes this seriously is just another fanboy.


#77 GhoX
Member since 2006 • 6267 Posts

Nvidia is no angel, that much is obvious. At the end of the day, it's just corporate being corporate, AMD included.

I will ultimately still buy the superior product, not the product with some superior background.


#78 Xtasy26
Member since 2008 • 5582 Posts

@mordeaniis said:

Nice single game you use to make your point.

From an account with an AMD profile picture. Anyone who takes this seriously is just another fanboy.

Having a profile pic means nothing. Besides, I could post overall benchmarks from previous generations where performance has increased comparatively, i.e. the R9 290X is now faster than the 780 Ti.


#79  Edited By 04dcarraher
Member since 2004 • 23829 Posts
@Xtasy26 said:
@mordeaniis said:

Nice single game you use to make your point.

From an account with an AMD profile picture. Anyone who takes this seriously is just another fanboy.

Having a profile pic means nothing. Besides, I could post overall benchmarks from previous generations where performance has increased comparatively, i.e. the R9 290X is now faster than the 780 Ti.

I would hope that the 290X would perform better than the 780 Ti, since the 780 Ti is only the "Big Kepler" chip; it took a long time for AMD to mature their drivers for their GPUs.

The 290X isn't that much faster overall; on average at 1440p it tends to be under 5%, and there's virtually no difference at 1080p... Even in games like The Witcher 3 that use tessellation, where Kepler shows its weaker tessellation performance, it's right on the heels of the 290 by a frame or two. In multiple games like BF4 the 780 Ti beats the 290X, while in others the 290X beats the 780 Ti; both trade blows... So it's really not clear cut that the 290X is always faster than the 780 Ti, even with "new drivers".


#80 Xtasy26
Member since 2008 • 5582 Posts

@04dcarraher said:
@Xtasy26 said:
@mordeaniis said:

Nice single game you use to make your point.

From an account with an AMD profile picture. Anyone who takes this seriously is just another fanboy.

Having a profile pic means nothing. Besides, I could post overall benchmarks from previous generations where performance has increased comparatively, i.e. the R9 290X is now faster than the 780 Ti.

I would hope that the 290X would perform better than the 780 Ti, since the 780 Ti is only the "Big Kepler" chip; it took a long time for AMD to mature their drivers for their GPUs.

The 290X isn't that much faster overall; on average at 1440p it tends to be under 5%, and there's virtually no difference at 1080p... Even in games like The Witcher 3 that use tessellation, where Kepler shows its weaker tessellation performance, it's right on the heels of the 290 by a frame or two. In multiple games like BF4 the 780 Ti beats the 290X, while in others the 290X beats the 780 Ti; both trade blows... So it's really not clear cut that the 290X is always faster than the 780 Ti, even with "new drivers".

You again! The last time I posted, in 2015 before I went out for New Year's Eve, you made one of the most cringeworthy and idiotic posts I have ever seen on the PC & A/V Hardware section, claiming that an HD 7870 is more powerful than an HD 5970, when in fact, on average, where games support multi-GPU configurations, the HD 5970 would beat an HD 7950, as pointed out by TechPowerUp's graph. Yuck.

But anyway, that was the entire point of the video the author posted: the performance gets better over time. Not many people will realize that, as most mainstream sites aren't going to go back and benchmark those games again with newer drivers. And when Pascal comes out, nVidia owners will not see how much Radeon GPUs have improved compared to when they launched (when those GPUs may have been losing to nVidia's); they will only compare Pascal to their current GPUs, which may show a significant performance gap, without realizing that the GPUs they currently own have been surpassed by equivalent Radeon GPUs for which, in some cases, the nVidia users paid more, i.e. the 780 Ti cost $700 whereas the R9 290X cost $550 and came with a free copy of Battlefield 4 back in 2013.


#81  Edited By 04dcarraher
Member since 2004 • 23829 Posts

Your whole argument back in 2015 was flawed to begin with...

Claiming that pure GFLOPS ratings show the whole picture of performance... when in fact raw numbers mean squat unless you compare by the same standards, i.e. you can't compare FLOP performance from one architecture vs another and always expect the one with the higher rating to perform better.

The 7870 is more efficient and faster per watt and kills the 5970 in multiple cases: around 75% higher fps in Arkham City, over double the FPS in Skyrim, around 95% better PassMark DirectCompute score, more than 3x better CLBenchmark raytrace score, more than 20% higher Crysis: Warhead framerate.

The fact is that the 7870 can beat the 6970 in multiple cases, and yet you think the 5970, aka two underclocked 5870s with the issues of older CrossFire, is going to be as strong as you think, all the time? Especially with modern DX11 games? And with only 1 GB of VRAM as well? Also, let's not forget AMD's massive driver update in early 2013 boosting GCN performance to the point of allowing the 7970 to finally beat the GTX 670/GTX 680 consistently at higher resolutions.

Even against a 6990, a 7870 provides 71% of the framerate in BF3 at 1080p ultra with 4xAA.

You really have a one-track mind that excludes seeing both sides of the coin...

Nvidia and AMD release drivers all the time improving performance and fixing issues. AMD has been using the same base architecture since 2012, while Nvidia has been doing fresh new ones every other series.

Also, the cycles between each company tend to overlap, and at times one gets a new series out before the other, charging a premium until there is direct competition. The 780 Ti was a cut-down Titan Black.

AMD had to start the 290X at a $550 price to be competitive, since Nvidia had GPUs comparable to it and faster than their previous fastest GPU.


#82 clyde46
Member since 2005 • 49061 Posts

Wasn't the 780Ti clocked higher than the Titan Black?


#83  Edited By Xtasy26
Member since 2008 • 5582 Posts

@04dcarraher said:

Your whole argument back in 2015 was flawed to begin with...

Claiming that pure GFLOPS ratings show the whole picture of performance... when in fact raw numbers mean squat unless you compare by the same standards, i.e. you can't compare FLOP performance from one architecture vs another and always expect the one with the higher rating to perform better.

The 7870 is more efficient and faster per watt and kills the 5970 in multiple cases: around 75% higher fps in Arkham City, over double the FPS in Skyrim, around 95% better PassMark DirectCompute score, more than 3x better CLBenchmark raytrace score, more than 20% higher Crysis: Warhead framerate.

The fact is that the 7870 can beat the 6970 in multiple cases, and yet you think the 5970, aka two underclocked 5870s with the issues of older CrossFire, is going to be as strong as you think, all the time? Especially with modern DX11 games? And with only 1 GB of VRAM as well? Also, let's not forget AMD's massive driver update in early 2013 boosting GCN performance to the point of allowing the 7970 to finally beat the GTX 670/GTX 680.

You really have a one-track mind that excludes seeing both sides of the coin...

Nvidia and AMD release drivers all the time improving performance and fixing issues. AMD has been using the same base architecture since 2012, while Nvidia has been doing fresh new ones every other series.

Also, the cycles between each company tend to overlap, and at times one gets a new series out before the other, charging a premium until there is direct competition. The 780 Ti was a cut-down, overclocked Titan Black.

AMD had to start the 290X at a $550 price to be competitive, since Nvidia had GPUs comparable to it and faster than their previous fastest GPU.

No, it was not flawed. I was comparing strictly teraflop performance. This is no different from comparing supercomputers, which have different petaflop numbers even though they may use different architectures. One may be a Cray-based supercomputer, another a Chinese custom-built supercomputer. That doesn't mean the one with the higher rating will always beat the other in every application.
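(As an aside, the "paper" teraflop figure being argued about here is just shader count x 2 (one FMA = 2 flops) x clock. A quick sketch follows; the card specs in it are commonly cited reference numbers quoted from memory and are assumptions, not benchmark results.)

```cpp
// Sketch only: theoretical FP32 throughput, TFLOPS = shaders * 2 * clock_GHz / 1000.
#include <cstdio>

double TheoreticalTflops(int shaders, double clockGHz)
{
    return shaders * 2.0 * clockGHz / 1000.0;
}

int main()
{
    std::printf("HD 5970 (2 x 1600 SPs @ 0.725 GHz): %.2f TFLOPS\n",
                TheoreticalTflops(3200, 0.725));
    std::printf("HD 7950 (1792 SPs @ 0.800 GHz):     %.2f TFLOPS\n",
                TheoreticalTflops(1792, 0.800));
    std::printf("HD 7870 (1280 SPs @ 1.000 GHz):     %.2f TFLOPS\n",
                TheoreticalTflops(1280, 1.000));
    return 0;
}
```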

And no one is saying that the 5970 will beat the 7870 all the time. Facepalm. It all depends on driver support and on games and applications supporting multi-GPU. Even after the HD 7950 was released in 2012, the 5970 was still faster than the HD 7950, as shown by TechPowerUp in 2012, almost 3 years AFTER the HD 5970 was released! And of course the HD 5970 will consume more power, as it was a dual-GPU card on an older node. Anyone who knows about GPUs would know that a dual-GPU card, especially one on an older process, will draw more power. The HD 5970 was 40nm while the HD 7870 was on 28nm. That's common sense.

As for the 780 Ti, it came AFTER the R9 290X, it cost $700, a full $150 more, and had 1 GB less VRAM than an R9 290X with a BF4 code, which means people who bought the 780 Ti still got screwed. And the R9 290X was getting comparable performance to the original Titan, which cost a ludicrous $1000. Which means not only did the 780 Ti owners get screwed, the Titan owners got screwed even more by nVidia!


#84 EducatingU_PCMR
Member since 2013 • 1581 Posts

A better example would be 7970/280x vs 680/770, the last two got left in the dust, especially with 2GB VRAM, lol NVIDIA

I wouldn't be surprised if the 390 starts crushing the 980 in a couple of months.


#85  Edited By 04dcarraher
Member since 2004 • 23829 Posts

@clyde46 said:

Wasn't the 780Ti clocked higher than the Titan Black?

Reference models, no. However, board partners produced 780 Tis with their own coolers that were clocked 150-200 MHz higher on average.


#86 clyde46
Member since 2005 • 49061 Posts

@04dcarraher said:
@clyde46 said:

Wasn't the 780Ti clocked higher than the Titan Black?

Reference models, no. However, board partners produced 780 Tis with their own coolers that were clocked 150-200 MHz higher on average.

I was getting confused. It was the 780 that was clocked higher than the original Titan.


#87  Edited By 04dcarraher
Member since 2004 • 23829 Posts
@EducatingU_PCMR said:

A better example would be 7970/280x vs 680/770, the last two got left in the dust, especially with 2GB VRAM, lol NVIDIA

I wouldn't be surprised if the 390 starts crushing the 980 in a couple of months.

The GTX 670/680 creamed the 7970 until AMD got their drivers in order in early 2013. Even with 2 GB of VRAM, the 7970 series really didn't leave the 680/770 in the dust. Even at 1440p the 280X is only 3-5% faster on average.

And the 390 with 8 GB won't surpass the 980, since it does not have enough processing power to make proper use of the 8 GB. Even at 4K the 390 PCS+ falls short of the GTX 980 4 GB, even when the buffer uses more than 4 GB.


#88  Edited By EducatingU_PCMR
Member since 2013 • 1581 Posts

@04dcarraher said:
@EducatingU_PCMR said:

A better example would be 7970/280x vs 680/770, the last two got left in the dust, especially with 2GB VRAM, lol NVIDIA

I wouldn't be surprised if the 390 starts crushing the 980 in a couple of months.

The GTX 670/680 creamed the 7970 until AMD got their drivers in order in early 2013. Even with 2 GB of VRAM, the 7970 series really didn't leave the 680/770 in the dust. Even at 1440p the 280X is only 3-5% faster on average.

And the 390 with 8 GB won't surpass the 980, since it does not have enough processing power to make proper use of the 8 GB. Even at 4K the 390 PCS+ falls short of the GTX 980 4 GB.

Yes, because the 7970 came first. You're acting as if late performance increases are cheating. AMD built GCN with this new paradigm of compute and multithreading in mind; that's why it took them longer to optimize the architecture.

And that's exactly the point here: discussing how these cards matured over time. NVIDIA likes to go cheap with the VRAM, and their "high end" 680 paid the price; the card also stopped receiving optimizations, which we know NVIDIA cards need because they are gimped. Look, the 270X (7870) is just 4% slower than the 770, lol, ridiculous.

The 7970/680 are 1080p cards, so I don't know why you would bring up 1440p. The 280X is currently 12% faster than the 770; that's nice for a card that got "creamed" at first.


#89  Edited By 04dcarraher
Member since 2004 • 23829 Posts

Again, your argument is flawed and you're moving the goalposts again... you were comparing the Titan X/980 Ti vs the Fury X.

Again, GFLOP/TFLOP ratings don't show the whole picture of performance... raw TFLOP numbers mean squat unless you compare by the same standards, i.e. you can't compare FLOP performance from one architecture vs another. Which is why older and other architectures vs newer ones were brought up.

Even the 7870 is able to supply more than 70% of the performance of a 6990, which is a GPU with twice the TFLOP rating and a newer architecture than the 5970...

Or compare the 5870, a 2.2 TFLOP GPU, vs the 2.5 TFLOP 7870, and yet the 7870 gets more than double the fps in DX11 games.

Again, 2012 is not a good year to test games for a 7950 vs 5970 comparison, since the vast majority of games back then still lacked a DX11 base.


#90 deactivated-5a8875b6c648f
Member since 2015 • 954 Posts

An R9 370 almost performing the same as a GTX 780 Ti? Seems odd.


#91  Edited By 04dcarraher
Member since 2004 • 23829 Posts

@EducatingU_PCMR:

I just wanted to point out that pure brute force and having more VRAM do not always mean a card is or can be faster... But yes, the 7970, i.e. GCN, is the more advanced architecture compared to Kepler. Also, performance depends on a game's feature usage. Back in late 2014, when the 7970/680/770 were still considered high end, they were neck and neck at 1080p/1440p. Into the latter half of 2015, when sites started using more modern games with more tessellation and other effects, the architecture limits and less focused driver support affected Kepler-based GPUs. Once Nvidia updated drivers to fix the Kepler tessellation issue, the 770 was faster than the vanilla 7970 but 3-4 fps behind the 280X.


#92  Edited By ronvalencia
Member since 2008 • 29612 Posts

@04dcarraher said:
@Xtasy26 said:
@mordeaniis said:

Nice single game you use to make your point.

From an account with an AMD profile picture. Anyone who takes this seriously is just another fanboy.

Having a profile pic means nothing. Besides, I could post overall benchmarks from previous generations where performance has increased comparatively, i.e. the R9 290X is now faster than the 780 Ti.

I would hope that the 290X would perform better than the 780 Ti, since the 780 Ti is only the "Big Kepler" chip; it took a long time for AMD to mature their drivers for their GPUs.

The 290X isn't that much faster overall; on average at 1440p it tends to be under 5%, and there's virtually no difference at 1080p... Even in games like The Witcher 3 that use tessellation, where Kepler shows its weaker tessellation performance, it's right on the heels of the 290 by a frame or two. In multiple games like BF4 the 780 Ti beats the 290X, while in others the 290X beats the 780 Ti; both trade blows... So it's really not clear cut that the 290X is always faster than the 780 Ti, even with "new drivers".

You haven't excluded older 3D game engines.

Dated Aug 2015, http://www.gamersnexus.net/hwreviews/2050-msi-radeon-r9-390x-gaming-review-and-benchmark?showall=1

Notice I'm selecting recently patched NVIDIA Gameworks titles. At 4K resolution, the performance is GPU-bound.

With lower resolutions, DX12 shifts the performance-bound component towards the GPU.

Relative to the R9 290X, the NVIDIA GTX 780 Ti is not aging well. Recently patched NVIDIA Gameworks titles run worse on NVIDIA Kepler than on AMD GCN.

It's a good thing Sony didn't select NVIDIA Kepler for their PS4, since it would be another NVIDIA RSX/GeForce 7 aging-GPU situation, but without an updated IBM CELL with similar FLOPS to "match and patch" its GPU partner.

The 7870 GE/R9 270X getting very close to the GTX 770 is a LOL situation.


#93  Edited By ronvalencia
Member since 2008 • 29612 Posts

@waahahah said:

@ronvalencia

A lot of what you posted either doesn't matter or is incorrect. AMD never intended an OPEN API structure but a "public" API. Those are their words. Even with the MIT license behind it, NVidia can't just change Mantle to suit their needs without fundamentally undermining Mantle as a standard.

The facts are, AMD did not allow participation from Nvidia/Intel in making Mantle. This completely negates any PR bullshit from AMD stating they are pro open standards...

MIT licenses aren't bad, but for creating standards... they don't matter. Intel/NVidia might be free to use TressFX, but they are still not free: they rely on AMD to accept the changes they want. AMD can easily develop TressFX behind closed doors without NVidia/Intel input. It means that AMD can force other vendors to be behind the curve. NVidia/Intel can make their own versions of TressFX to add features, but then we are again undermining a library's purpose of helping developers target one backend, not three. Because AMD is the controlling party, Intel will inevitably roll their own version eventually. Even if it's based on TressFX, it will likely diverge to work better with their hardware. If you haven't paid attention to the open source communities... this is how most projects tend to go unless the project is held by a truly unbiased party. Open source tends to have poor conflict resolution between parties.

Look at some of the things you posted: their libraries are supported better on their own hardware. How can we get mad at NVidia for using tessellation heavily when their video cards perform better with it?

MIT licenses will allow developers to try out the tech risk-free; in the end, AMD's tactics are designed to gain developer support while attempting to stay ahead of the competition by being the owner of those "open source" projects. AMD has proved that if it can retain ownership and authority over a project, it will.

Again, such arguments have nothing to do with openness and transparency.

Does NVIDIA Gameworks source code license enable the developer to sub-license/show to 3rd party entities? The answer is NO.

Does MIT licensed source code enable the developer to sub-license/show to 3rd party entities? The answer is YES.

For source code access, is the cost of NVIDIA Gameworks non-discriminatory? The answer is NO. NVIDIA has stated that source code access cost is handled "case by case".

For source code access, is MIT-licensed source code non-discriminatory? The answer is YES.

Furthermore, the MIT license doesn't restrict 3rd-party source code modification and its redistribution.

https://en.wikipedia.org/wiki/MIT_License

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

The Simplified BSD license used by FreeBSD is essentially identical to the MIT License.
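To summarize the contrast being drawn, here is a small illustrative sketch; the flags simply restate the Q&A above, and the Gameworks entry reflects how its terms are characterized in this thread, not NVIDIA's actual agreement text:

```python
# Illustrative only: these flags restate the comparison made in this post;
# the Gameworks entry is as characterized in this thread, not NVIDIA's EULA.
LICENSE_TERMS = {
    "MIT / Simplified BSD": {
        "sublicense_to_third_parties": True,
        "modify_and_redistribute": True,
        "source_access_cost": "free, non-discriminatory",
    },
    "Gameworks (as described above)": {
        "sublicense_to_third_parties": False,
        "modify_and_redistribute": False,
        "source_access_cost": "negotiated case by case",
    },
}

def can_show_source_to_third_party(license_name: str) -> bool:
    """Can a developer legally share the licensed source with a 3rd party (e.g. another IHV)?"""
    return LICENSE_TERMS[license_name]["sublicense_to_third_parties"]

print(can_show_source_to_third_party("MIT / Simplified BSD"))            # True
print(can_show_source_to_third_party("Gameworks (as described above)"))  # False
```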

From https://technet.microsoft.com/en-us/library/dn486827.aspx

Microsoft has incorporated FreeBSD and MIT-licensed FOSS products into its Windows products.

Portions of this software are based in part on the work of Massachusetts Institute of Technology. Because Microsoft has included the Massachusetts Institute of Technology software in this product, Microsoft is required to include the following text that accompanied such software:

Copyright 1989, 1990 by the Massachusetts Institute of Technology. All Rights Reserved.

Export of this software from the United States of America may require a specific license from the United States Government. It is the responsibility of any person or organization contemplating export to obtain such a license before exporting.

WITHIN THAT CONSTRAINT, permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of M.I.T. not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission. M.I.T. makes no representations about the suitability of this software for any purpose. It is provided "as is" without express or implied warranty.

...

Portions of this software are based in part on FreeBSD. Because Microsoft has included the FreeBSD software in this product, Microsoft is required to include the following text that accompanied such software:

All of the documentation and software included in the 4.4BSD and 4.4BSD-Lite Releases is copyrighted by The Regents of the University of California.

Copyright 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994 The Regents of the University of California. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

I use MIT-licensed FOSS products (e.g. the Expat XML library) with my closed-source products, so go ahead and commence legal action against me. Let's end this argument with a proper legal action; I'll await your filing.

Your post is incorrect.

Avatar image for Xtasy26
Xtasy26

5582

Forum Posts

0

Wiki Points

0

Followers

Reviews: 53

User Lists: 0

#94  Edited By Xtasy26
Member since 2008 • 5582 Posts

@04dcarraher said:

Again, your argument is flawed and you're moving the goalposts again... you were comparing the Titan X and 980 Ti vs the Fury X.

Again, GFLOP/TFLOP ratings don't show the whole picture of performance... raw TFLOP numbers mean squat unless you compare like for like, i.e. you can't compare FLOP performance from one architecture vs another. Which is why older and other architectures vs newer ones were brought up.

Even the 7870 is able to supply more than 70% of the performance of a 6990, which is a GPU with twice the TFLOP rating and a newer architecture than the 5970...

Or even compare the 5870, a 2.2 TFLOP GPU, vs the 2.5 TFLOP 7870, and yet the 7870 gets more than double the FPS in DX11 games.

Again, 2012 is not a good year for testing games with the 7950 vs the 5970, since the vast majority of games back then still lacked a DX11 base.

Again, we are talking strictly about teraflop performance. No one is arguing architecture. Different supercomputers use different architectures; that won't change the fact that certain applications may run better on one than another. Certain games may favor AMD while others may favor NVIDIA.

The same goes when you factor in multi-GPU cards: they lose in games that don't have proper driver support. The 6990 will beat the HD 7870 by an even bigger margin than the HD 5970 does. The 5970 is already beating the HD 7950. The graph below shows the 6990 crushing the HD 7870 by 25%+ on average; heck, it even beats the stock HD 7970 (which is not a surprise). Things will get worse and worse for the HD 7870. 2012 is perfectly fine for the DX11 benches that TechPowerUp used. Heck, the HD 5870 was the first DX11 card, and those benches came three years after the first DX11 game, DiRT 2, released in 2009. TechPowerUp used many DX11 games in their HD 7970 review, such as BF3, DiRT 3, Crysis 2, and Metro 2033; the list goes on and on. Here is the relative performance of the HD 7870 vs the HD 6990:

Don't make me bring out the benches for the 11.5 Teraflop monster known as the R9 295X2. It might make your head explode!
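For reference, the "raw TFLOPS" numbers being argued about come from a simple formula: theoretical FP32 throughput = shader count × clock × 2 (one multiply-add per shader per cycle). A minimal sketch using the commonly listed reference specs (clocks rounded; these are assumed spec-sheet values, not measurements):

```python
# Minimal sketch: theoretical FP32 throughput = shaders x clock (GHz) x 2 ops (FMA).
# Specs below are the commonly listed reference numbers; clocks are rounded.
def tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * clock_ghz * 2 / 1000  # GFLOPS -> TFLOPS

cards = {
    "HD 7870 GHz Ed.":     (1280, 1.000),
    "HD 5970 (dual GPU)":  (3200, 0.725),
    "HD 6990 (dual GPU)":  (3072, 0.830),
    "R9 295X2 (dual GPU)": (5632, 1.018),
}
for name, (sp, clk) in cards.items():
    print(f"{name:20s} {tflops(sp, clk):5.2f} TFLOPS (theoretical)")
```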

Avatar image for mrbojangles25
mrbojangles25

58305

Forum Posts

0

Wiki Points

0

Followers

Reviews: 11

User Lists: 0

#95 mrbojangles25
Member since 2005 • 58305 Posts

@flyincloud1116 said:

So should I go AMD?

They're both good; you cannot pick a wrong brand, only a wrong class of card.

Avatar image for EducatingU_PCMR
EducatingU_PCMR

1581

Forum Posts

0

Wiki Points

0

Followers

Reviews: 5

User Lists: 0

#96  Edited By EducatingU_PCMR
Member since 2013 • 1581 Posts

@04dcarraher:

The tessellation fix only helps in cases of ridiculous use of the feature, like HairWorks in TW3, which BTW is more costly and looks like crap compared with PureHair. The card will always be inferior, even in NVIDIA-sponsored games; latest examples:

Feel sorry for Gimpler owners

Avatar image for 04dcarraher
04dcarraher

23829

Forum Posts

0

Wiki Points

0

Followers

Reviews: 2

User Lists: 0

#97 04dcarraher
Member since 2004 • 23829 Posts

@Xtasy26:

Wrong, TFLOPS ratings are not interchangeable with GPU performance across different architectures. Just because a GPU has, say, 10 TFLOPS does not mean that another GPU on a different or newer architecture with only 9 TFLOPS will perform worse. lol, you're missing the point: the 7870 is providing 73% of the performance of a 6990, which has 100% more TFLOP performance. Which is the point I'm making: if the 7870 is able to provide 70%+ of a 6990, and the 6990 is more than 30% faster than the 5970, then more times than not the 7870 will beat the 5970 in DX11 games.
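That efficiency argument can be made numeric; a small sketch using the ~73% figure quoted above and rounded theoretical TFLOPS (illustrative arithmetic, not a benchmark):

```python
# Minimal sketch of "efficiency beats quantity": relative performance divided
# by theoretical TFLOPS. The 73% figure is the one quoted in this thread;
# TFLOPS values are the rounded reference numbers used above.
def perf_per_tflop(relative_perf: float, tflops: float) -> float:
    return relative_perf / tflops

hd7870 = perf_per_tflop(relative_perf=0.73, tflops=2.56)  # GCN, single GPU
hd6990 = perf_per_tflop(relative_perf=1.00, tflops=5.10)  # VLIW4, dual GPU
print(f"HD 7870: {hd7870:.3f} perf/TFLOP, HD 6990: {hd6990:.3f} perf/TFLOP")
# -> the newer architecture extracts roughly 45% more real performance per TFLOP
```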

Avatar image for Xtasy26
Xtasy26

5582

Forum Posts

0

Wiki Points

0

Followers

Reviews: 53

User Lists: 0

#98  Edited By Xtasy26
Member since 2008 • 5582 Posts

@04dcarraher said:

@Xtasy26:

Wrong, TFLOPS ratings are not interchangeable with GPU performance across different architectures. Just because a GPU has, say, 10 TFLOPS does not mean that another GPU on a different or newer architecture with only 9 TFLOPS will perform worse. lol, you're missing the point: the 7870 is providing 73% of the performance of a 6990, which has 100% more TFLOP performance. Which is the point I'm making: if the 7870 is able to provide 70%+ of a 6990, and the 6990 is more than 30% faster than the 5970, then more times than not the 7870 will beat the 5970 in DX11 games.

What?! I never said that different architectures will perform the same. Facepalm. You were claiming that the HD 7870 will run slower than an HD 5970 (which was not the case on average), and against the HD 6990 it performed even worse. And who says that just because a card has more teraflops it will scale linearly? Let alone a dual-GPU configuration, which doesn't scale linearly either. Facepalm.
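On the scaling point, a rough sketch (the scaling factors are made up for illustration): a dual-GPU card only approaches its combined rating when the game's multi-GPU profile actually scales.

```python
# Minimal sketch (made-up scaling factors): a dual-GPU card's effective
# performance depends on how well the game/driver profile scales, so doubled
# TFLOPS rarely means doubled frame rates.
def dual_gpu_effective(single_gpu_perf: float, scaling: float) -> float:
    """scaling = fraction of the second GPU that is actually usable (0..1)."""
    return single_gpu_perf * (1 + scaling)

print(dual_gpu_effective(100, 0.85))  # good CrossFire profile  -> 185.0
print(dual_gpu_effective(100, 0.0))   # no profile / no support -> 100.0
```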

Avatar image for 04dcarraher
04dcarraher

23829

Forum Posts

0

Wiki Points

0

Followers

Reviews: 2

User Lists: 0

#99 04dcarraher
Member since 2004 • 23829 Posts

lol "Raw TFLOPS is raw TFLOPS" again means nothing in claiming most powerful gpu when efficiency beats quantity when comparing different architectures.

Avatar image for Xtasy26
Xtasy26

5582

Forum Posts

0

Wiki Points

0

Followers

Reviews: 53

User Lists: 0

#100 Xtasy26
Member since 2008 • 5582 Posts

@04dcarraher said:

lol "Raw TFLOPS is raw TFLOPS" again means nothing in claiming most powerful gpu when efficiency beats quantity when comparing different architectures.

That doesn't change the fact that it still has more teraflops. Heck, I could argue that some supercomputers are more efficient than others, but at the end of the day the top supercomputer is the one with the most petaflops. And where in the world did it claim that this is the most powerful GPU? It was referring to the most powerful computer chip, and a consumer computer chip at that. And that is 100% true if you go by raw teraflop count.