DirectX 12 Can Combine Nvidia and AMD Cards

Microsoft poised to announce breakthrough feature at GDC, GameSpot understands; GeForce and Radeon cards can unite in the same PC and combine VRAM.


Microsoft could be on the verge of a graphics card breakthrough with the arrival of DirectX 12, as the new API will allow PC users to combine GPUs from different manufacturers.

Presently, PC users who want to double the number of graphics cards attached to their motherboard are restricted by the manufacturer. Two Nvidia GeForce cards of the same type can work together via SLI, and two AMD Radeon cards can unite via CrossFire, but the two brands cannot be mixed and matched.

However, Microsoft is preparing a major announcement at the Game Developers Conference, where it is expected to explain that DX12 can combine all the different graphics resources in a system and treat them as though they were a single card.

The rumour first emerged on Tom's Hardware earlier this week. A source connected to the matter, who asked not to be named, has since explained to GameSpot that the feature is genuine.

Microsoft is already preparing to bring many of its Xbox One games to PC

Key to the new process is how DirectX 12 will bind multiple GPUs together. According to Tom's Hardware, the tech then "treats the entire graphics subsystem as a single, more powerful graphics card. Thus, users get the robustness of running a single GPU, but with multiple graphics cards."

Such a breakthrough could bring new levels of convenience for PC enthusiasts and developers alike. For the first time, it will also mean that multiple GPUs can pool their memory. In theory, installing two 2GB GPUs into a system would give the end user a usable 4GB of memory, unlike the current system, which would only give a user 2GB.
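In rough arithmetic terms, the difference between today's mirrored setups and the pooled model described above can be sketched like this (an illustrative Python sketch, not real driver code; the function name and the simple min/sum model are assumptions made for illustration):

```python
# Illustrative sketch: usable VRAM under mirrored multi-GPU
# (SLI/CrossFire, where assets are duplicated on every card) versus
# the pooled model described for DirectX 12.

def usable_vram(cards_gb, pooled):
    """cards_gb: list of per-card VRAM sizes in GB."""
    if pooled:
        # DX12-style pooling: capacities simply add up.
        return sum(cards_gb)
    # Mirrored rendering: every card holds a full copy of the assets,
    # so the smallest card sets the effective limit.
    return min(cards_gb)

print(usable_vram([2, 2], pooled=False))  # mirrored: 2 GB usable
print(usable_vram([2, 2], pooled=True))   # pooled:   4 GB usable
```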

Tom's Hardware notes that the API includes a "frame rendering method called SFR, which stands for Split Frame Rendering."

It explains: "Developers will be able to manually, or automatically, divide the texture and geometry data between the GPUs, and all of the GPUs can then work together to work on each frame. Each GPU will then work on a specific portion of the screen, with the number of portions being equivalent to the number of GPUs installed."
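As a rough illustration of that description, the following Python sketch divides a frame's rows into one slice per installed GPU. The even row-slicing scheme is an assumption for illustration; a real SFR implementation would divide regions based on workload.

```python
# Illustrative sketch: Split Frame Rendering (SFR) as described above.
# Each GPU is assigned one horizontal slice of the frame, with the
# number of slices equal to the number of GPUs installed.

def split_frame(height, num_gpus):
    """Return (start_row, end_row) slices, one per GPU."""
    base, extra = divmod(height, num_gpus)
    slices, row = [], 0
    for gpu in range(num_gpus):
        rows = base + (1 if gpu < extra else 0)  # spread leftover rows
        slices.append((row, row + rows))
        row += rows
    return slices

# A 1080-row frame split across three GPUs:
print(split_frame(1080, 3))  # [(0, 360), (360, 720), (720, 1080)]
```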

AMD's Mantle API, which similarly sits much closer to the hardware than DirectX 11, already allows for this feature, with Civilization: Beyond Earth being one of the most recent games to make use of it. Speaking to GameSpot, AMD's Game Scientist Richard Huddy confirmed that it's "possible" this technology can significantly reduce latency and allow for the pooling of memory, but emphasised that the onus is on developers to make use of it.

Microsoft showcased DirectX 12 in January during its Windows 10 media briefing. At the time, it surprised onlookers by suggesting that new graphics cards may not be necessary to take advantage of the API. This multi-GPU feature would help explain why older cards could perform capably.

Join the conversation
There are 858 comments about this story

Avatar image for kiddcyr

Is this a hint that the Xbox One has the full DX12 feature API and hardware set?

Avatar image for yellow_jacket

Sorry to come across as stupid, but does this mean that in the future my system may be able to take advantage of the spare 650 Ti I have lying around?

That would really help my GTX 970 in the VRAM area, giving me effectively 5GB of VRAM and maybe a tad more performance (waits for the 4.5GB comments...)

Avatar image for alex33x

<< LINK REMOVED >> I would like to know this as well. I have an Alienware Area-51 with triple 980s, but I have a brand new 290X with 8GB of VRAM. I wonder if I'll be able to combine two 980s with the 290X.

Avatar image for gaveroid

Here's my question; if you happen to get an AMD and Nvidia card, say a 970 and 270x, and play a game like, I dunno, GTA IV, which doesn't have DX12 support, how will that work? If it's part of the DirectX 12 update how would you use it on older games that are still performance hungry (or badly optimized, cough-GTA4-cough)? Surely they won't say "If it doesn't have DX12, no mixing and matching brands!" - why invest in such a thing if only the newest games will use it?

Avatar image for Kanus_oq_Seruna

Meanwhile, Apple is trying to find new ways to make hardware inefficient.

Avatar image for gord0nd

Would this remove the need for sli or crossfire? I have a gtx970, but my mobo is crossfire only. So as of now, i could not just throw in another gtx970 as my mobo does not support sli. With DX12, would you actually need a sli/crossfire hybrid board to mix gpu? Or does sli/crossfire certification become a moot point.

In short, could i use 2 gtx970 on a crossfire nonSLI board with dx12?

Avatar image for jon30rockaholic


I am in the same exact situation as you. It would be awesome if I could do SLI on my Crossfire mobo with DX12, but I highly doubt that it would work.

Avatar image for hildegain

Can't wait for Win10.

I've been using a dual gpu system for some time and I would love to be able to finally get past all of the bugs and drawbacks to using my system fully for once.

It would also be nice if this would be able to resolve issues I have with recording on a multi-gpu environment.

Avatar image for themidgethotel

In many games, SLI and Crossfire don't even work properly, or at all, and these are the solutions created by the graphics card manufacturers for their own cards which they 100% understand the hardware behind.

Meanwhile, Direct3D is a HAL built on top of graphics card drivers. It's likely not going to be as revolutionary or work as effectively as bare-metal solutions. Still, sharing VRAM is a plus, and there might be some benefits outside of that for viewing multiple cards as one. But don't get too hyped up on the performance until you've seen it in action.

Avatar image for ishsgames

<< LINK REMOVED >> Direct3D 12 will be based off mantle :)

Avatar image for Ripper_TV

Am I the only one to think DirectX 10 should've been the one with all of these groundbreaking features? Instead it brought NOTHING and was an incredibly disappointing piece of trash.

And now out of the blue the 12th version does it!

Avatar image for hildegain

<< LINK REMOVED >> Sounds like someone doesn't understand 3D graphics software and API development AT ALL.

Directx10 allowed for deferred procedures (such as lighting) to be done on the GPU separate from other processes AND allowed for fullscreen anti-aliasing during that fact.

Directx10 used a new shader model, SM4.0, and a new display driver model.

Directx10 also allowed for other post processing techniques to be queued by the GPU and save valuable CPU cycles. In other words, DX10 moved a lot of the computation to the GPU in order to allow games to take more advantage of the power of a GPU as opposed to a slower CPU.

Directx11 added the features of GPU accelerated Tessellation, Shader model 5.0. This was supposed to address some issues with transparency anti-aliasing technologies.

Directx12 (which works on most DX11 cards) is set to add better support for multi-gpu environment and asynchronous parallel processing technologies.

Your statement of "why wasn't this in DX10" is akin to saying "why didn't my old Pentium 3 run as good as my i7!?". It makes you look like a complete and total retard. If you actually took a second to think about the inane stupidity that you posted BEFORE you posted it, you would have clearly not said anything at all!

Technology development is a science, not a magic act! Piss off.

Avatar image for hildegain

<< LINK REMOVED >> Adding on to what I was saying: (editing expired, seriously what a stupid system Gamespot)

If you said back in 2008 that we could make a technology that will allow two cards of different make and manufacturer to work together and share Vram, they'd tell you that it's not possible! They'd laugh at you.

And your first reactionary thought was "why did it take them til 12 to do this!?" you un-appreciative prick!

I don't know how people like you are even capable of blinking and breathing at the same time! It's clear that you have absolutely NO COHERENT THOUGHTS AT ALL! If you would actually take the time to read the articles about the DX releases, then you would not have called DX10 useless at all!

Much of the good that DX10 brought, you are now enjoying as a part of DX11 when developers release a nicely optimized game that can run for your expensive GPU without maxing your CPU and hogging all of the performance!

Wow! You are effing stupid!!!

Avatar image for alex33x

Now saying PC = Master Race is not just a statement. Now I feel dumb for buying an Area-51 with triple GTX 980's. I wish I had waited to see what combinations become available later.

Avatar image for FreeCredits

Wow she has a long neck! Anyway I'm excited and can finally play AC:U or games with more than 50K drawcalls. Bbye low draw distance/ pop-ins!

Avatar image for Warfighter_971

at the end of the day, all we want is NO FPS CAPPING.

Avatar image for kingdudman_xbox

It would be great if AMD could include the 5000 and 6000 series graphics cards with DirectX 12. Right now I am feeling screwed with my crossfire Radeon 6970s. NVidia is including all their old DirectX 11 graphics cards. I guess I will have to start buying NVidia, because I will know that they will still support their old graphics cards.

Avatar image for karnis

<< LINK REMOVED >> it's 2015 those cards are too old

Avatar image for hildegain

<< LINK REMOVED >> You aren't missing anything DX12 has to offer with a 5000 and 6000 series card because it doesn't support the Async processing that DX12 has to offer. It's kind of like buying a gas powered car and complaining about new more efficient fuel cells that allow electric cars to travel farther!

AMD started to support the technologies that would become the staple of DX12 with their GCN architecture. It means that AMD is actually going to have the advantage when it comes to DX12 performance.

Nvidia on the other hand.... Well not all of their DX11 cards will support 12... I think the 400 series won't, and the 500 series will see little to no improvements outside of SLI for DX12 at all.

So you're still going to need a more modern GPU on either side to enjoy any of the benefits of DX12 anyway. If you choose to buy an Nvidia over such an uneducated assumption, then you might want to re-think your purchase and allow someone else to do the market research for you.

As << LINK REMOVED >> said, "It's not like your old card would be able to take advantage of some of these new features anyway." Precisely! Even if a card as old as an Nvidia 560 Ti (my GF has this card) technically supports DX12 right out of the box, it doesn't mean that it's necessarily going to fare anywhere near as well as the 7000 series, which are in a good position to fully benefit from the changes in DX12.

It might be worth waiting for Nvidia's next GPUs and AMD's next lineup to decide which camp to go for. As for those of us on modern AMD GPUs though, we'll be waiting to see how much we get out of DX12.

Avatar image for themidgethotel

<< LINK REMOVED >> Sounds to me like you need a better card. It's not like your old card would be able to take advantage of some of these new features anyway. Time to upgrade.

I'd recommend nVidia just for the driver support alone over AMD. AMDs are a bit finicky.

Avatar image for Naxirian

<< LINK REMOVED >> 6970's are years old..... There have been 2 generations and a 3rd about to release since they came out. You can't expect to be 3 generations down the line and still get the latest features. Nvidia are just lucky, AMD did an architecture change with the 7000 series cards onwards.

Avatar image for Tranula

I don't see benefits from being able to intermingle brands unless you want to combine 2 existing cards from 2 computers, but the fact you can stack RAM is cool.

Avatar image for Andrex1212

I don't think people realize what this really means....... It's only allowing use of both cards' VRAM if the game allows it. The developer has to incorporate these features to make use of it, so don't expect pretty much any game that's out right now to incorporate this technology.

Avatar image for Wahab_MinSeo

This is finally happening: 4GB + 4GB VRAM = 8GB. Really appreciate that!

Thank you so much @lucyjamesgames and Really Like your respectful talk, Have a Nice day! :)

Avatar image for shadymitchy

<< LINK REMOVED >> That's what I'm really looking forward to. Don't really care about mixing and matching GPUs, I wouldn't recommend doing that even if it was possible, but the possibility of being able to double your graphics card's RAM is mind blowing!

Avatar image for shadymitchy

I am so damn excited for the next year or so in PC gaming. I recently built a beast of a PC.. MSI GD65 mobo, i7 4790K CPU, 16GB Corsair Dominator Platinum RAM, and 2... count em 2! Zotac AMP Omega GTX 980s factory overclocked to 1304MHz... and this thing flies... but once DirectX 12 comes out, and if all the stuff we've been hearing ends up being true, it's going to push my computer so much further. Can't wait!

Avatar image for erevos_csd

<< LINK REMOVED >> Your parents really spoil you. Maybe because you are adopted.

Avatar image for ratchet3

<< LINK REMOVED >><< LINK REMOVED >> what a complete piece of trash you are...

Avatar image for hildegain

<< LINK REMOVED >><< LINK REMOVED >> Just kill yourself. Most people who own PCs of this level actually worked for it themselves. I've never even heard of a parent who would buy their child such a PC. They're more likely to buy them a crappy console if anything.

Avatar image for ShimmerMan

more gimmicks. Have fun with your microstutter that makes a game running at 100+ FPS feel like 15FPS.

Avatar image for hildegain

<< LINK REMOVED >> As everyone else has said, you haven't had a dual GPU system ever, have you? While true that some games run really poorly on Dual GPU systems, Micro-stutter hasn't been a problem of mine since like 2007...

As for DX12, this is really quite the game changer for Multi GPU systems... so I doubt Microstutter will be an issue. Don't cry just because I'll be able to run my games far more effectively than you could dream.

Avatar image for Naxirian

<< LINK REMOVED >> Clearly you don't have a dual GPU rig and you haven't used one recently.... enjoy your 60 fps, I'll enjoy my 120 lol....

Avatar image for darthvulva

<< LINK REMOVED >> I'd hardly call the ability to combine gpus from different manufacturers a gimmick, nor the pooling of vram from separate cards. I'd actually call that one of the biggest breakthroughs in gpu technology for a long time.

Also, AMD and Nvidia have made frame pacing a major part of their drivers for quite a while now. 'Micro stuttering' is something I haven't heard someone moan about for ages. Are you from 2010, sent forward in time to spew BS on gamespot threads??

Avatar image for elessarGObonzo

<< LINK REMOVED >><< LINK REMOVED >> i've heard a lot of people still running AMD 6000s and Nvidia 600s still complain about microstutter but not much from the last 2 series of either brand. the only other times is when games don't have good crossfire or sli support, but reverting back to single card setting usually seems to fix.

most of these types of issues have almost always been user error with setup or with windows.

and besides pooling vram, we get PhysX or TressFX, also DoF and AO for whichever company's technology the game is supporting. there would be a lot of bonuses if this works out the way we are hoping.

i would really hope this would lead us towards a more unified graphics tech in the future where you don't need one brand or the other to run certain effects correctly.

Avatar image for ShimmerMan

<< LINK REMOVED >><< LINK REMOVED >> Go and type "micro stutter" into google and see how many videos/articles and forum posts there are on the issue from 2014. Microstuttering doesn't go away, it's a normal process of how GPUs render visuals, even single GPUs have microstutter to some degree. Just not as bad as two gpus and definitely won't be as bad as AMD+Nvidia, bad idea.

Avatar image for wolfpup7

<< LINK REMOVED >> Sounds about right. Cool idea, but...

Avatar image for wolfpup7

That's completely insane and awesome if somehow true! Considering how hard it is to get two IDENTICAL video cards working together in a meaningful way, I'll believe it when I see it, but geez, imagine if it were true?

Theoretically it could, I suppose, work with Intel's GPUs + an Nvidia GPU in a notebook, do like I don't know, 10 frames on one and 1 on the other or something... I'll believe it when I see it though.

The RAM thing doesn't make sense to me though. I mean that doesn't even sound plausible under the best of circumstances. You're still stuck storing duplicate data on both video cards. I mean even if you have one GPU reading/writing data to the other GPU's video RAM, that's way slower than doing it to its own video RAM. Even if it were technically possible, it wouldn't be useful under most circumstances.

Avatar image for tarantani

<< LINK REMOVED >> Yeah, actually NVidia already had something similar, it was called Hybrid SLI, where for example a gtx 280 could work together with an onboard nforce 980 or gforce 8300 chipset. But that combination was unfortunate, as all independent graphics cards had so much more power than the onboard chips and were slowed down through that.

For example:

While the gtx 280 rendered almost 60 frames in most demanding games back then, the onboard chip only rendered 5 to 10. Now the gtx 280 had to discard some of its own generated frames to fit in the ones from the onboard chip.

This led to Nvidia dumping Hybrid SLI, since there was no "performance" gain other than having both GPUs busy.

See hybrid SLI:


Avatar image for elessarGObonzo

<< LINK REMOVED >><< LINK REMOVED >> AMD had older motherboards with onboard ATI GPUs years ago that would do the same thing with a PCI or AGP GPU. both would run at the slower GPU's speeds but would share memory. didn't work well then either because the onboard GPUs were almost always very slow and not meant for gaming.

what i've heard about DirectX 12 is that when the games have been designed to do so the 2 GPUs will both be working on separate data so there is no memory "overlap". every other frame is rendered by 1 card so they both have separate tasks.

will have to see it work before i get too excited.
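The alternate-frame scheme described in the comment above (every other frame handed to a different card, so each works on separate data) can be sketched in a few lines of Python. This is illustrative only; the function name is hypothetical.

```python
# Illustrative sketch: Alternate Frame Rendering, with whole frames
# handed to GPUs round-robin so each card works on separate data.

def assign_frames(num_frames, num_gpus):
    """Map each frame index to the GPU that renders it."""
    return {frame: frame % num_gpus for frame in range(num_frames)}

print(assign_frames(6, 2))  # {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}
```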

Avatar image for wolfpup7

<< LINK REMOVED >><< LINK REMOVED >> Yeah, and that's GPUs from the same company! This is an awesome idea, but I'm just not sure how plausible it is...

Avatar image for tarantani

I can see beyond the horizon blue screens...

*** STOP: 0x0000003B (Bla,bla,bla...
*** atikmdag.sys - Address bla bla bla...

*** STOP: 0x00000116 (Bla,bla,bla...
*** nvlddmkm.sys - Address bla bla bla...

NOW for user convenience combined into:

*** STOP: F*** YOU, it does not work!!!