This morning, AMD’s social media channels teased a new Ve.ga website featuring a countdown to a “Vega architecture preview.” The clock runs out at 9:00 a.m. Eastern time on January 5, the opening day of CES 2017.
Some crackheads like him believed it would trade blows with a 980 Ti. For anyone else with a semblance of knowledge it's exactly as powerful as expected.
Yeah, I thought it was pretty common knowledge from the start that it was a budget-centric mid-range GPU. I mean, wasn't the $200 price a pretty big red flag?
It was. Didn't stop @tormentos from hyping it up as a killer card that would rival the top tiers at the time. We called him crazy and he kept rambling. I for one am very happy with the RX 480's performance. At that price point it's hard to beat. Especially considering it somehow caught up with the 1060.
RX-480 is OK for its price. GTX 1070 has superior delta memory compression for its 256-bit GDDR5-8000 memory setup.
For AMD Vega...
From http://www.techspot.com/review/1096-star-wars-battlefront-benchmarks/
"We chose the Ewok planet of Endor (Tana) for testing as it was more graphically demanding than scenes that took place on Hoth or Tatooine, for example".
Some crackheads like him believed it would trade blows with a 980 Ti. For anyone else with a semblance of knowledge it's exactly as powerful as expected.
I would not purchase a 480 myself, so it's irrelevant to me, and it's disappointing that AMD can't compete at the level I care about.
Looks good, but I'll have to see some unbiased tests on this card first. Can't trust anything AMD shows us with its carefully cherry-picked tests to show off their hardware.
Well, Hitman DX12 was the worst cherry pick. RX-480 was fine with Doom 2016 Vulkan, but it couldn't surpass the R9-390X.
As for the RX-480, any overclock edition will be bound by effective memory bandwidth.
For the reference RX-480:
((256-bit x 8000 MT/s) / 8) / 1024 x 0.776 (Polaris memory bandwidth efficiency) x 1.36 (Polaris compression boost) = 263.84 GB/s
--
Scorpio's "more than 320 GB/s memory bandwidth" claim:
((384-bit x 6900 MT/s) / 8) / 1024 x 0.776 (Polaris memory bandwidth efficiency) x 1.36 (Polaris compression boost) = 341.34 GB/s
PS:
((384-bit x 6900 MT/s) / 8) / 1024 = 323.4 GB/s physical memory bandwidth.
((384-bit x 7000 MT/s) / 8) / 1024 = 328.1 GB/s physical memory bandwidth.
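The arithmetic above can be wrapped in a small helper. Note that the 77.6% efficiency factor and the 1.36x compression multiplier are this post's assumed Polaris figures, not official AMD numbers:

```python
def effective_bandwidth_gbps(bus_width_bits, data_rate_mts,
                             efficiency=0.776, compression=1.36):
    """Estimate effective memory bandwidth in GB/s.

    bus_width_bits: memory bus width (e.g. 256)
    data_rate_mts:  effective GDDR5 data rate in MT/s (e.g. 8000)
    efficiency:     assumed achievable fraction of peak (post's 77.6%)
    compression:    assumed delta-compression multiplier (post's 1.36x)
    """
    physical = (bus_width_bits * data_rate_mts / 8) / 1024  # GB/s
    return physical * efficiency * compression

# Reference RX 480: 256-bit GDDR5-8000
print(round(effective_bandwidth_gbps(256, 8000), 2))  # 263.84
# Scorpio estimate: 384-bit GDDR5-6900
print(round(effective_bandwidth_gbps(384, 6900), 2))  # 341.34
```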
Comparison.
The memory bandwidth gap between Fury X and R9-290X = 1.266X (random textures).
Fury X's memory compression is inferior to NVIDIA's Maxwell.
The FLOPS gap between Fury X and R9-290X = 1.48X.
The frame rate gap between R9-290X and Fury X is 1.19X.
The random-texture memory bandwidth gap of 1.266X is closer to the frame rate gap of 1.19X. The FLOPS gap between the R9-290X (5.8 TFLOPS) and Fury X (8.6 TFLOPS) plays very little part in the frame rate gap.
With the 980 Ti (5.63 TFLOPS), its superior memory compression enables it to match Fury X's results.
Example of near-brain-dead Xbox One ports running on PC GPUs:
The frame rate difference between the 980 Ti and R9-290X is 1.31X in Forza 6 Apex.
The effective memory bandwidth gap between the 980 Ti and R9-290X is 1.38X.
Forza 6 Apex is another example of effective memory bandwidth influencing the frame rate result.
Conclusion: when there's enough FLOPS for a particular workload, effective memory bandwidth is a better predictor for higher-grade GPUs. AMD must address its effective memory bandwidth issue.
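As a minimal numeric sketch of that conclusion, using the ratios cited above for Fury X vs. R9-290X, the effective-bandwidth ratio predicts the observed frame rate gap much more closely than the FLOPS ratio does:

```python
# Compare which ratio (FLOPS or effective bandwidth) better predicts the
# observed frame-rate gap. Numbers are the ones cited in this thread.
flops_ratio = 8.6 / 5.8      # Fury X vs R9-290X FLOPS gap, ~1.48x
bandwidth_ratio = 1.266      # random-texture effective bandwidth gap
observed_fps_ratio = 1.19    # measured frame rate gap

def prediction_error(predicted, observed):
    """Relative error of a predicted performance ratio."""
    return abs(predicted - observed) / observed

print(round(prediction_error(flops_ratio, observed_fps_ratio), 3))      # ~0.246
print(round(prediction_error(bandwidth_ratio, observed_fps_ratio), 3))  # ~0.064
```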
Some crackheads like him believed it would trade blows with a 980 Ti. For anyone else with a semblance of knowledge it's exactly as powerful as expected.
I would not purchase a 480 myself, so it's irrelevant to me, and it's disappointing that AMD can't compete at the level I care about.
What level do you care about? The 'sound like an elitist forum douche' level?
The 480 is not a letdown at all; what are you talking about? It easily matches the 1060 6GB in a lot of titles now, and DX12/Vulkan is certainly in the 480's favor, plus it has 2GB more VRAM. AMD calls it FineWine; it'll only get better, and it can potentially match the 980 Ti long term in DX12/Vulkan games.
Don't know what you were expecting from the 480, but it's a great card for the price and is better than the 1060 at the same price.
Vega is going to be a noticeably more efficient GPU architecture-wise; basically, the stated TFLOP count will be much more achievable in games versus current GCN. (E.g. the RX 480 is rated at 5.8 TFLOPS, but with the current GCN architecture and its CUs it doesn't produce 5.8 TFLOPS worth of compute on every frame unless the game is well optimized for DX12/Vulkan, like Doom, which is a fantastic example of a game using pretty much the full TFLOP count on every frame, i.e. efficient full utilization.)
-Other SIMDs that remain idle on current GCN can be turned off/powered down a bit, and the active shaders can receive higher clock boosts too, which equals more performance per shader.
The situation described above applies to current GCN in DX11 games; current GCN under DX12/Vulkan already does better, and Vega will bring big boosts to DX11.
AMD GCN's compute is not the major issue..
Fury X's compute is not the major issue...
AMD must address effective memory bandwidth issue...
We’ll suitably round out our overview of AMD’s Vega teaser with a look at the front and back-ends of the GPU architecture. While AMD has clearly put quite a bit of effort into the shader core, shader engines, and memory, they have not ignored the rasterizers at the front-end or the ROPs at the back-end. In fact, this could be one of the most important changes to the architecture from an efficiency standpoint.
Back in August, our pal David Kanter discovered one of the important ingredients of the secret sauce that is NVIDIA’s efficiency optimizations. As it turns out, NVIDIA has been doing tile based rasterization and binning since Maxwell, and that this was likely one of the big reasons Maxwell’s efficiency increased by so much. Though NVIDIA still refuses to comment on the matter, from what we can ascertain, breaking up a scene into tiles has allowed NVIDIA to keep a lot more traffic on-chip, which saves memory bandwidth, but also cuts down on very expensive accesses to VRAM.
For Vega, AMD will be doing something similar. The architecture will add support for what AMD calls the Draw Stream Binning Rasterizer, which true to its name, will give Vega the ability to bin polygons by tile. By doing so, AMD will cut down on the amount of memory accesses by working with smaller tiles that can stay on-chip. This will also allow AMD to do a better job of culling hidden pixels, keeping them from making it to the pixel shaders and consuming resources there.
As we have almost no detail on how AMD or NVIDIA are doing tiling and binning, it’s impossible to say with any degree of certainty just how close their implementations are, so I’ll refrain from any speculation on which might be better. But I’m not going to be too surprised if in the future we find out both implementations are quite similar. The important thing to take away from this right now is that AMD is following a very similar path to where we think NVIDIA captured some of their greatest efficiency gains on Maxwell, and that in turn bodes well for Vega.
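As a rough illustration of the general binning idea only (neither NVIDIA's nor AMD's actual implementation is public), a rasterizer can sort primitives into screen-space tiles by bounding box so that each tile's color/depth traffic stays on-chip; the tile size and data structures here are purely hypothetical:

```python
# Hypothetical sketch of binning primitives by screen tile. Real hardware
# binners (NVIDIA's since Maxwell, Vega's Draw Stream Binning Rasterizer)
# are undisclosed; this only illustrates the general technique.
TILE = 64  # tile size in pixels (illustrative choice)

def bin_triangles(triangles, screen_w, screen_h):
    """Map each triangle to the set of tiles its bounding box overlaps."""
    bins = {}  # (tile_x, tile_y) -> list of triangle indices
    for i, tri in enumerate(triangles):
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        # Clamp the bounding box to the screen.
        x0, x1 = max(min(xs), 0), min(max(xs), screen_w - 1)
        y0, y1 = max(min(ys), 0), min(max(ys), screen_h - 1)
        for ty in range(int(y0) // TILE, int(y1) // TILE + 1):
            for tx in range(int(x0) // TILE, int(x1) // TILE + 1):
                bins.setdefault((tx, ty), []).append(i)
    return bins

# One triangle spanning two horizontal tiles:
tris = [[(10, 10), (100, 10), (50, 40)]]
print(bin_triangles(tris, 256, 256))  # {(0, 0): [0], (1, 0): [0]}
```

Shading then proceeds tile by tile, so each tile's render-target working set can live in on-chip storage instead of round-tripping through VRAM.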
Meanwhile, on the ROP side of matters, besides baking in the necessary support for the aforementioned binning technology, AMD is also making one other change to cut down on the amount of data that has to go off-chip to VRAM. AMD has significantly reworked how the ROPs (or as they like to call them, the Render Back-Ends) interact with their L2 cache. Starting with Vega, the ROPs are now clients of the L2 cache rather than the memory controller, allowing them to better and more directly use the relatively spacious L2 cache.
@1080pOnly: I'm not trying to insult people who would buy the 480. It was a disappointment for me because if they competed at the top tier I believe it would drive prices down. This would also make the "elite" level accessible to more people.
... Define "elite" level... Because the mid-range 480 and 1060 play games at 1080p ultra settings, pushing upwards of 100fps..
Yeah, I thought it was pretty common knowledge from the start that it was a budget centric mid-range GPU, I mean wasn't the $200 price a pretty big red flag....
It was. Didn't stop @tormentos for hyping it up as a killer card that would rival the top tiers at the time. We called him crazy and he kept rambling. I for one am very happy with the RX 480's performance. At that price point it's hard to beat. Especially considering it somehow caught up with the 1060.
AMD cards tend to have significant boosts in performance as time goes on in their life span. There is a great video about this.
I hope AMD starts pressuring NVIDIA more... I do not like Nvidia's stance of forcing tech exclusivity (Nvidia 3D, G-Sync, refusal to let their cards use FreeSync even though it's an open standard, etc.)..
Only because of the same base GPU architecture they have been using since late 2011, i.e. GCN. AMD was also building the Mantle API around GCN, which gave them a head start on the standards that would be used in future APIs. The downside is that you inherit architectural issues that can't be addressed until a new base design is introduced. Nvidia introducing a new base GPU design every other series is a double-edged sword: you see massive improvements, but the older architecture's drivers don't get the focus they should once a new architecture arrives.
Nvidia can do proprietary tech because they can afford to support both the software and the hardware. AMD would do the same thing if they had the ability; their open source stance is out of necessity. AMD's last proprietary item was Mantle, which only GCN-based GPUs could use.
As joint x86 hardware platform holders, both Intel and AMD have major stakes in the PCI-E, USB, and VESA standards.
Intel supported AMD's FreeSync during the DP1.2a standard's creation in the VESA group.
Both Intel and AMD define what's inside the x86 PC box! It's in both AMD's and Intel's interest to continue the x86 PC clone hardware industry.
NVIDIA doesn't control X86 PC hardware, hence they attempted to create their own standards.
Again, the Mantle API (running MS HLSL/Shader Model 5) had the end goal of becoming an open API standard, and the Vulkan API has reached that goal. Mantle's MS HLSL had to be replaced for Vulkan.
Quoting ex-Nvidia engineer from https://www.gamedev.net/topic/666419-what-are-your-opinions-on-dx12vulkanmantle/#entry5215019
The Mantle spec is effectively written by Johan Andersson at DICE, and the Khronos Vulkan spec basically pulls Aras P at Unity, Niklas S at Epic, and a couple guys at Valve into the fold.
Non-AMD competitors had access to the Mantle API, much like the DirectX12 API roadmap.
Back in 2008, NVIDIA could be recalcitrant with their support for DX10.1, favoring vendor-specific extensions over the DX10.1 standard.
The low porting effort between the DirectX12 and Mantle APIs indicates their similarities. My point: AMD has been sharing Mantle with MS.
From https://developer.nvidia.com/dx12-dos-and-donts
On DX11 the driver does farm off asynchronous tasks to driver worker threads where possible – this doesn’t happen anymore under DX12
Nvidia's DX11 driver was already using key DX12-style speed-up methods, i.e.
1. asynchronous tasks
2. multiple threads
AMD caught up on asynchronous tasks and multi-threading with DX12. Mantle's asynchronous compute wasn't even working until near DirectX12's reveal.
Notice that preemptive context switching and page-level memory management appeared during 2007's WDDM 2.x planning stage.
AMD implemented page-level memory management and preemptive context switching with the Radeon HD 7970!
Hardware VM support for GPUs has been implemented in HWS-equipped AMD GCN 1.2 parts and the Xbox One.
NVIDIA has no excuse for being late! It was mentioned back at WinHEC 2007.
With VEGA on the horizon, an Nvidia collapse should be just around the corner.
I really doubt it. Nvidia will see how VEGA performs then it will decide whether to push Volta out early. I never really cared for AMD video cards anyway. Too hot, power hungry and loud for my tastes. What I really wanted to see from CES was more information on Ryzen! The piss poor Kabylake launch has shown just how much we need AMD to take it to Intel.
They could also just lower the price of pascal and launch the 1080 Ti.
@m3dude1 said:
OP doesn't even know what they mean by 2x peak throughput per clock lmao
@tushar172787: If VEGA fails to live up to the hype then yes Nvidia could just lower the price of Pascal however if VEGA dominates Pascal then Nvidia will move Volta forward. All eyes are on AMD now, they have to make VEGA and Ryzen stick.
I have been gaming on a laptop for quite some time now, and it's not that bad with a 1060m. But I will finally be able to build the system I always wanted, with a Vega GPU and Ryzen CPU. Can't wait.
@clyde46: too hot? That's why you buy a decent AIB card, same thing with Nvidia cards
I like the reference design for the Nvidia cards. My two 980 Ti's are Strixes from Asus. My Titan X Pascal is reference from Nvidia, my GTX 1080 is a blower style from EVGA, my 1070 and 1060 are the same blower style from Asus, and my 1050 Ti is an aftermarket style from Zotac. Across all those Nvidia cards, the reference design is superior to what AMD offers.
@tushar172787: If VEGA fails to live up to the hype then yes Nvidia could just lower the price of Pascal however if VEGA dominates Pascal then Nvidia will move Volta forward. All eyes are on AMD now, they have to make VEGA and Ryzen stick.
In terms of physical memory bandwidth, Vega 10's known 512 GB/s is 2X the RX-480's 256 GB/s. Any additional effective gains beyond 2X the RX-480 would come from the tiling + binning + cached-ROPs improvements. Effective memory bandwidth is very important for high-TFLOPS GPUs; it's my main concern, since it can bottleneck a higher TFLOPS count.
In terms of TFLOPS, Vega 10's known 12.5 TFLOPS is 2.16X the RX-480's 5.8 TFLOPS. Any additional gains in this area beyond 2X the RX-480 would come from the double-rate FP16 feature; perhaps the new dispatcher yields a (minor) additional gain. In general, the RX-480's 5.8 TFLOPS is memory bandwidth bound, e.g. the R9-390X's 5.9 TFLOPS gets better results than the RX-480 at 4K.
In terms of tessellation, Vega 10 is stated to be "more than 2X" the RX-480's version. Perhaps the new dispatcher yields an additional gain.
To reach 12.5 TFLOPS with 64 CUs, it will need a ~1.526 GHz clock speed, which is 1.45X over Fury X's 1.05 GHz.
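That clock figure follows from GCN's per-CU throughput of 64 ALUs, each doing 2 FLOPs per clock via FMA; a quick check:

```python
# Deriving the clock needed for 12.5 TFLOPS on a 64-CU GCN-style part.
# Each GCN CU has 64 shader ALUs doing 2 FLOPs (one FMA) per clock.
cus, alus_per_cu, flops_per_clock = 64, 64, 2
target_tflops = 12.5

flops_per_cycle = cus * alus_per_cu * flops_per_clock  # 8192 FLOPs/cycle
clock_ghz = target_tflops * 1e12 / flops_per_cycle / 1e9

print(round(clock_ghz, 3))         # 1.526 (GHz)
print(round(clock_ghz / 1.05, 2))  # 1.45 (ratio vs Fury X's 1.05 GHz)
```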
Vega 10 sounds like RX-480 2X with tiling+binning+cache ROPS improvements.