Marfoo's forum posts

#1 Posted by Marfoo (5994 posts) -

Hey everyone, just moved from the AMD camp to the Nvidia camp with a shiny new GTX 970. I have a couple of questions. With AMD I was able to select what anti-aliasing mode I wanted per game, whether it be MSAA, SSAA, or getting fancy with EQAA/CFAA or a combination of the two.

Nvidia seems to give me control over MSAA samples and transparency samples, which is fine, but as far as I understand there's no way to specify a different mode like CSAA or even SSAA without using a tool like Nvidia Inspector.

I want to play Skyrim with 4x SSAA. Right now I'm achieving this by using DSR at 4K, and the visual quality is good, but not as good as a traditional rotated-grid SSAA filter. The other downside is that my Steam overlay is now super tiny because the game thinks my screen is 4K. I actually use the overlay a lot, so it's unusable when using DSR for SSAA.
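If it helps to see the numbers, here's the arithmetic behind the DSR approach (just a quick sketch in C++ — the 1920x1080 native resolution is my monitor, swap in your own):

#include <cstdio>

int main() {
    // Native display resolution (my monitor; substitute your own).
    const int nativeW = 1920, nativeH = 1080;

    // A 4x DSR factor renders 4x the pixels (2x per axis),
    // then downsamples back to the native resolution.
    const double dsrFactor = 4.0;
    const double axisScale = 2.0; // sqrt(4.0)

    int renderW = static_cast<int>(nativeW * axisScale); // 3840
    int renderH = static_cast<int>(nativeH * axisScale); // 2160

    std::printf("Render target: %dx%d (%.0fx the pixels of %dx%d)\n",
                renderW, renderH, dsrFactor, nativeW, nativeH);
    // Overlays that scale with the render resolution (like Steam's)
    // get drawn at 3840x2160 and end up tiny on a 1080p panel.
    return 0;
}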

So, can anyone step me through enabling SSAA through Nvidia Inspector? I played with my Elder Scrolls V profile but didn't seem to get it to work.

#2 Posted by Marfoo (5994 posts) -

@Coseniath: For the first part, it was assumed no die shrink was involved. You can add more transistors as long as they're not sucking up too much power, i.e. lower bandwidth (clock speed) and a wider architecture, like Nvidia. As for the second part, we were basically talking about the same thing: I said AMD trades higher bandwidth for a smaller die and higher yields, whereas Nvidia uses lower bandwidth, a larger die, lower yields, and more power efficiency.

So we're basically talking about the same things. :)
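To put some rough numbers on the "wider but slower" idea: dynamic power scales roughly with switching capacitance (more transistors / a wider chip), voltage squared, and clock speed. A toy sketch with completely made-up numbers, not real GPU figures:

#include <cstdio>

// Very rough dynamic power model: P ~ C * V^2 * f.
// All numbers below are illustrative, not real GPU specs.
double dynamicPower(double relCapacitance, double voltage, double clockGHz) {
    return relCapacitance * voltage * voltage * clockGHz;
}

int main() {
    // "Narrow but fast": fewer units, higher clock, higher voltage.
    double narrowFast = dynamicPower(1.0, 1.20, 1.00);
    // "Wide but slow": ~50% more units, lower clock, lower voltage.
    double wideSlow   = dynamicPower(1.5, 1.05, 0.80);

    std::printf("narrow/fast: %.3f  wide/slow: %.3f\n", narrowFast, wideSlow);
    // Despite 50% more hardware, the wide/slow config can land at
    // similar or lower power because V^2 * f drops faster.
    return 0;
}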

#3 Posted by Marfoo (5994 posts) -

I have the Define R4. I only use the two front intake fans and the bottom intake, and the only exhaust is the rear; I prefer to run a positive-pressure setup. I have all Noctua fans and the machine is whisper quiet. The GPUs rev up when gaming (they're stock blower style). If you're worried about the front grill limiting intake, I get plenty of airflow with my Noctuas, which prioritize silence over airflow anyway.

To keep it quiet you'll want to run it like I have it. Using the side and top vents will make the system noticeably louder, and honestly you don't need many fans to achieve good cooling anyway. Most gaming cases are overkill in terms of the number of fans, and the returns diminish quickly. Of course, if you have the H100i, you'll need the top vents for the radiator.

Overall, I love the case. The build quality is fantastic, and they include a lot of little details, extra parts, and tools that make it an excellent value and a premium buy. Super happy with my purchase. I can share some pics of my setup if you're interested.

#4 Posted by Marfoo (5994 posts) -

The biggest advantage I see to power efficiency, besides mobile supremacy, is being able to scale up the number of transistors more easily without having to worry about the power ceiling. If you look at Hawaii vs GK110, they're pretty close in performance. Hawaii has something like 800 million to 1 billion fewer transistors than GK110, and it's definitely a power hog in comparison. Do you think AMD could have bulked up on even more hardware had they not hit the power ceiling? What if GCN had been a little more efficient and they were able to hit that comfy 250W at a similar transistor count to Nvidia's?

It's hard to say which is harder: increasing parallelism at lower clock speeds and fighting yield problems, or getting better yields with a smaller die but pushing clock speeds and, in the process, the overall TDP. All I know is that these engineers have some very tough problems to tackle, and it's interesting that Nvidia and AMD have different takes on the problem. I'm interested to see what both parties bring to the table.

#5 Edited by Marfoo (5994 posts) -

#6 Posted by Marfoo (5994 posts) -

I really hope they do release it for PC. I mean, all the consoles are x86 now, and if they're porting to Xbox One and PS4, it's a no-brainer to build the PC version as well. With how similar Xbox One and PC development are, there's no excuse for a poor port like GTA IV was.

#7 Posted by Marfoo (5994 posts) -

I think this can be an excellent product depending on your needs. I would have loved something like this during my engineering undergrad, when I was constantly going between class, the lab, and meetings. The one-tap to OneNote would have been awesome for taking down quick notes from a classmate, in lab, or during meetings, and the high resolution makes it easy to get work done; I can attach a keyboard when I need it. Plus, with full Windows and a fast Intel processor, it can run all my engineering applications no sweat. Oh well, that time has passed. Cool product if you ask me. You can definitely get a powerful laptop in this price range, but for me the money is worth it for the flexibility.

#8 Posted by Marfoo (5994 posts) -

My take on overclocking: if your hardware at its stock configuration isn't running your games to your satisfaction, overclocking can sometimes tap into extra headroom that will take you where you need to go at no cost. As long as you don't jack up the voltages too high and make sure temps are safe, the risk of damage is extremely low. So yes, it's absolutely worth it if you want to help your hardware really stretch out its value. Some products come with tons of OC headroom baked in and are a bargain: pick one up cheap and then clock it to the performance of something you would have paid more for.

It does take some time to make sure things are stable though, always make sure to do rigorous stability testing.
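If anyone wants the flavor of what "stability testing" is actually checking for, here's a toy sketch in C++ (not a replacement for real tools like Prime95 or OCCT): run the same computation on every core and flag any result that doesn't match, since silent math errors are the classic sign of an unstable overclock.

#include <cmath>
#include <cstdio>
#include <thread>
#include <vector>

// Deterministic floating-point workload. On a stable system every run
// on the same machine produces an identical value; a mismatch suggests
// the overclock is corrupting results.
double workload() {
    double acc = 0.0;
    for (int i = 1; i <= 1000000; ++i)
        acc += std::sin(i * 0.001) * std::cos(i * 0.002);
    return acc;
}

int main() {
    unsigned threads = std::thread::hardware_concurrency();
    if (threads == 0) threads = 4;

    const double reference = workload();

    for (int pass = 0; pass < 10; ++pass) {
        std::vector<std::thread> pool;
        std::vector<double> results(threads, 0.0);
        for (unsigned t = 0; t < threads; ++t)
            pool.emplace_back([&, t] { results[t] = workload(); });
        for (auto& th : pool) th.join();

        for (unsigned t = 0; t < threads; ++t)
            if (results[t] != reference)
                std::printf("Mismatch on pass %d, thread %u -- unstable!\n", pass, t);
    }
    std::printf("Done. No mismatch messages above means this toy test passed.\n");
    return 0;
}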

#9 Posted by Marfoo (5994 posts) -

Nvidia's Maxwell was nearly redesigned from the ground up for maximum power savings; I'm not sure they've added a whole lot feature-wise though. Their current Maxwell offering hasn't gone down to 20nm yet, and it matches Kepler's performance at something like 50% of the power usage. Keep in mind this is still at 28nm! That is pretty phenomenal, and since Nvidia likes to build humongous GPUs, the more they can scale without having to fight power limits, the better. Not much info from the AMD camp, although they are releasing a new 28nm die that is also supposed to cut down on power consumption. We'll see what happens.

#10 Posted by Marfoo (5994 posts) -

@Marfoo said:
@ronvalencia said:
@Marfoo said:

@04dcarraher:

In theory it could work with any GPU. Mantle is a streamlined, multi-core-optimized rendering pipeline that replaces DirectX or OpenGL. It allows the developer to send more work to the GPUs by eliminating overhead and taking advantage of parallelism. However, it's also a low-level API, which means the rendering-specific commands are tied to an architecture; they would have to rewrite that portion of the code for every architecture out there, so in this case they're only supporting GCN, their latest.

It could work with VLIW4, VLIW5 and even Nvidia's architecture, but rendering would have to be written for each architecture individually.

Perhaps Microsoft will catch on and update DirectX in future revisions to be more like Mantle, so that it's more efficient across multiple cores and has less overhead, while still keeping the high-level functionality that lets rendering instructions be universal. You'll always get more out of your GPU using a low-level API, but it means you have to write your engine for every architecture.

VLIW4 and VLIW5 don't have multiple ACE units, and GK110 has Hyper-Q.

What is the significance of these hardware specifics to Mantle outside of architecture-specific rendering instructions?

Xbox One has multiple ACE support.

Read http://www.eurogamer.net/articles/digitalfoundry-microsoft-to-unlock-more-gpu-power-for-xbox-one-developers

In terms of GPU API features, Mantle should NOT have fewer features than Xbox One.

Which is why I'm saying the feature set of Mantle (besides the driver/GPU interfacing part) would have to be rewritten for every architecture. That doesn't mean the part that compiles and feeds commands to the GPU, which is more streamlined, can't work with any card. But the feature set would change from architecture to architecture, which is a given with low-level APIs.
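To illustrate the split I'm describing, here's a conceptual sketch in C++; the types and names are made up for illustration and are not Mantle's actual API. The multi-threaded command recording part is architecture-agnostic, while the backend that translates commands into GPU packets is the part that has to be rewritten per architecture (GCN, VLIW4/5, Kepler, etc.):

#include <cstdio>
#include <thread>
#include <vector>

// Conceptual sketch only -- these types and names are invented to
// illustrate the idea, not Mantle's real interface.

struct Command { int drawCallId; };
struct CommandBuffer { std::vector<Command> commands; };

// Architecture-specific backend: this is the part that would have to be
// rewritten for each GPU architecture.
struct GpuBackend {
    virtual void translateAndSubmit(const CommandBuffer& cb) = 0;
    virtual ~GpuBackend() = default;
};

struct GcnBackend : GpuBackend {
    void translateAndSubmit(const CommandBuffer& cb) override {
        // A real backend would emit GCN-specific command packets here.
        std::printf("GCN backend: submitting %zu commands\n", cb.commands.size());
    }
};

int main() {
    GcnBackend backend;

    // Generic, architecture-agnostic part: several threads record their
    // own command buffers in parallel, with no shared driver lock.
    const int workers = 4;
    std::vector<CommandBuffer> buffers(workers);
    std::vector<std::thread> pool;
    for (int t = 0; t < workers; ++t)
        pool.emplace_back([&, t] {
            for (int i = 0; i < 100; ++i)
                buffers[t].commands.push_back({t * 100 + i});
        });
    for (auto& th : pool) th.join();

    // Only the final submission goes through the architecture-specific code.
    for (const auto& cb : buffers)
        backend.translateAndSubmit(cb);
    return 0;
}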