New AMD Radeon VEGA Leak Surfaces, 2x Peak Throughput per Clock, 4x Power Efficiency

#1  Edited By ronvalencia
Member since 2008 • 29612 Posts

https://www.custompcreview.com/news/new-amd-radeon-vega-leak-surfaces-2x-peak-throughput-per-clock-4x-power-efficiency-8x-capacity-per-stack/35732/

High Bandwidth Cache

  • 4x Power Efficiency
  • Primitive Shaders
  • High Bandwidth Cache Controller
  • 8x Capacity/Stack (Referring to 2nd Generation HBM)
  • 2x Peak Throughput Per Clock
  • Next-Generation Compute Engine
  • Draw Stream Binning Rasterizer

Small Vega relates to Scorpio...

http://www.pcworld.com/article/3154006/components-graphics/new-amd-radeon-website-counts-down-to-vega-graphics-card-architecture-preview.html

This morning, AMD’s social media channels teased a new Ve.ga website featuring a countdown to a “Vega architecture preview.” The clock runs out at 9:00 a.m. Eastern time on January 5, the opening day of CES 2017.

#2  Edited By ellos
Member since 2015 • 2532 Posts

So the video is a warning to Volta. Vega has already passed Pascal, huh?

#3 R4gn4r0k
Member since 2004 • 46260 Posts

Cool, the reveal is really soon.

We urgently need some competition again in the high-end GPU market.

#4 m3dude1
Member since 2007 • 2334 Posts

OP doesn't even know what they mean by 2x peak throughput per clock lmao

#5 tormentos
Member since 2003 • 33784 Posts

After the RX 480 letdown, I'll wait and see.

#6 dynamitecop
Member since 2004 • 6395 Posts

@tormentos said:

After the RX 480 letdown, I'll wait and see.

How was it a letdown?

#7  Edited By Juub1990
Member since 2013 • 12620 Posts
@dynamitecop said:

How was it a letdown?

Some crackheads like him believed it would trade blows with a 980 Ti. For anyone else with a semblance of knowledge it's exactly as powerful as expected.

#8 SecretPolice
Member since 2007 • 44058 Posts

Unholy cow, Mighty Scorpio indeed!! :P

#9 dynamitecop
Member since 2004 • 6395 Posts

@Juub1990 said:
@dynamitecop said:

How was it a letdown?

Some crackheads like him believed it would trade blows with a 980 Ti. For anyone else with a semblance of knowledge it's exactly as powerful as expected.

Yeah, I thought it was pretty common knowledge from the start that it was a budget-centric mid-range GPU, I mean wasn't the $200 price a pretty big red flag...

#10  Edited By Juub1990
Member since 2013 • 12620 Posts
@dynamitecop said:

Yeah, I thought it was pretty common knowledge from the start that it was a budget-centric mid-range GPU, I mean wasn't the $200 price a pretty big red flag...

It was. Didn't stop @tormentos from hyping it up as a killer card that would rival the top tiers at the time. We called him crazy and he kept rambling. I for one am very happy with the RX 480's performance. At that price point it's hard to beat. Especially considering it somehow caught up with the 1060.

#11 Pedro
Member since 2002 • 69448 Posts

@tormentos: That's totally understandable, especially with the Pro sporting that disappointment of a card technology.

#12 Juub1990
Member since 2013 • 12620 Posts

@Pedro said:

@tormentos: That's totally understandable, especially with the Pro sporting that disappointment of a card technology.

A gimped version at that.

#16  Edited By ronvalencia
Member since 2008 • 29612 Posts

@tormentos said:

After the RX 480 letdown, I'll wait and see.

RX-480 is OK for its price. GTX 1070 has superior delta memory compression for its 256-bit GDDR5-8000 memory setup.

For AMD Vega...

From http://www.techspot.com/review/1096-star-wars-battlefront-benchmarks/

"We chose the Ewok planet of Endor (Tana) for testing as it was more graphically demanding than scenes that took place on Hoth or Tatooine, for example".

It is already trading blows with the Titan XP.

[Embedded video]

#17 schu
Member since 2003 • 10191 Posts

@Juub1990 said:
@dynamitecop said:

How was it a letdown?

Some crackheads like him believed it would trade blows with a 980 Ti. For anyone else with a semblance of knowledge it's exactly as powerful as expected.

I would not purchase a 480 myself, so it's irrelevant, and disappointing that AMD can't compete at the level I care about.

#18 Yams1980
Member since 2006 • 2862 Posts

Looks good, but I'll have to see some unbiased tests on this card by people first. Can't trust anything AMD shows us with its carefully cherry-picked tests to show off their hardware.

#19  Edited By ronvalencia
Member since 2008 • 29612 Posts

@Yams1980 said:

Looks good, but I'll have to see some unbiased tests on this card by people first. Can't trust anything AMD shows us with its carefully cherry-picked tests to show off their hardware.

Well, Hitman DX12 was the worst cherry pick, but the RX-480 was fine with Doom 2016 Vulkan, though it couldn't surpass the R9-390X.

As for the RX-480, any overclocked editions will be bound by effective memory bandwidth.

For reference RX-480

(((256-bit x 8000 MHz) / 8) / 1024) x Polaris's 77.6 percent memory bandwidth efficiency x Polaris's 1.36X compression booster = 263.84 GB/s

--

Scorpio's "more than 320 GB/s memory bandwidth" claim.

(((384-bit x GDDR5-6900 MHz) / 8) / 1024) x Polaris's 77.6 percent memory bandwidth efficiency x Polaris's 1.36X compression booster = 341.34 GB/s

PS:

(((384-bit x GDDR5-6900 MHz) / 8) / 1024) = 323 GB/s physical memory bandwidth.

(((384-bit x GDDR5-7000 MHz) / 8) / 1024) = 328 GB/s physical memory bandwidth.
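
For anyone who wants to play with the numbers, here's the same arithmetic as a quick Python sketch. The 77.6% efficiency and 1.36X compression factors are my Polaris assumptions from above, not official AMD figures:

```python
# Effective-bandwidth arithmetic from the formulas above.
# The efficiency and compression factors are assumptions, not official specs.

def effective_bandwidth_gbs(bus_bits, data_rate_mhz,
                            efficiency=0.776, compression=1.36):
    """Physical GB/s = (bus width x data rate) / 8 / 1024, then scaled by
    the assumed bandwidth efficiency and delta-compression booster."""
    physical_gbs = (bus_bits * data_rate_mhz) / 8 / 1024
    return physical_gbs * efficiency * compression

print(effective_bandwidth_gbs(256, 8000))  # reference RX-480: ~263.84 GB/s
print(effective_bandwidth_gbs(384, 6900))  # Scorpio claim:    ~341.34 GB/s
```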

Comparison.

The memory bandwidth gap between Fury X and R9-290X = 1.266X (random textures).

Fury X's memory compression is inferior to NVIDIA's Maxwell.

The FLOPS gap between Fury X and R9-290X = 1.48X.

The frame rate gap between R9-290X and Fury X is 1.19X.

The random-texture memory bandwidth gap's 1.266X factor is closer to the frame rate gap's 1.19X. The FLOPS gap between R9-290X (5.8 TFLOPS) and Fury X (8.6 TFLOPS) plays very little part in the frame rate gap.

With the 980 Ti (5.63 TFLOPS), its superior memory compression enables it to match Fury X's results.

An example of near-brain-dead Xbox One ports running on PC GPUs:

The frame rate difference between the 980 Ti and R9-290X is 1.31X with Forza 6 Apex.

The effective memory bandwidth gap between the 980 Ti and R9-290X is 1.38X.

Forza 6 Apex is another example of effective memory bandwidth influencing the frame rate result.

Conclusion: when there's enough FLOPS for a particular workload, effective memory bandwidth is the better prediction method for higher-grade GPUs. AMD must address its effective memory bandwidth issue.
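
To make that concrete, a trivial sketch checking which ratio tracks the observed frame rate gap, using only the numbers quoted above:

```python
# Which gap better predicts the Fury X vs R9-290X frame rate gap?
# All figures are the ones quoted in this post.

flops_gap = 8.6 / 5.8       # ~1.48X TFLOPS gap (Fury X vs R9-290X)
bandwidth_gap = 1.266       # random-texture effective bandwidth gap
framerate_gap = 1.19        # measured frame rate gap

# Distance of each factor from the observed frame rate scaling:
print(abs(bandwidth_gap - framerate_gap))  # ~0.08 -> bandwidth tracks closely
print(abs(flops_gap - framerate_gap))      # ~0.29 -> FLOPS gap does not
```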

#20 1080pOnly
Member since 2009 • 2216 Posts

@schu said:
@Juub1990 said:
@dynamitecop said:

How was it a letdown?

Some crackheads like him believed it would trade blows with a 980 Ti. For anyone else with a semblance of knowledge it's exactly as powerful as expected.

I would not purchase a 480 myself, so it's irrelevant, and disappointing that AMD can't compete at the level I care about.

What level do you care about? The 'sound like an elitist forum douche' level?

#21 ronvalencia
Member since 2008 • 29612 Posts

@xboxiphoneps3 said:
@tormentos said:

After the RX 480 letdown, I'll wait and see.

The 480 is not a letdown at all, what are you talking about? It easily matches the 1060 6GB in a lot of titles now, and DX12/Vulkan is certainly in the 480's favor, plus it has 2GB more VRAM. It's called FineWine by AMD; it'll only get better, and it can potentially match the 980 Ti in the long term in DX12/Vulkan games.

Don't know what you were expecting from the 480, but it is a great card for the price and is better than the 1060 if both are at the same price.

Vega is going to be a noticeably more efficient GPU architecture-wise; basically, the stated paper TFLOP count will be much more fully achieved in games than on current GCN. (I.e. the RX 480 states 5.8 TFLOPS, but it is not producing 5.8 TFLOPS worth of compute power in every frame on the current GCN architecture with its CUs unless the game is well optimized for DX12/Vulkan, like Doom, which is a fantastic example of a game using pretty much the full TFLOP count on every frame (efficient full utilization).)

- Other SIMDs that remain idle on current GCN can be turned off/powered down a bit, and the remaining active shaders can receive higher clock boosts too = more performance per shader.

The situation shown above for current GCN is an example of current GCN in DX11 games; DX12/Vulkan on current GCN looks more like the bottom one. Vega will bring big boosts to DX11.

AMD GCN's compute is not the major issue...

Fury X's compute is not the major issue...

AMD must address the effective memory bandwidth issue...

http://www.anandtech.com/show/11002/the-amd-vega-gpu-architecture-teaser/3

ROPs & Rasterizers: Binning for the Win(ning)

We’ll suitably round-out our overview of AMD’s Vega teaser with a look at the front and back-ends of the GPU architecture. While AMD has clearly put quite a bit of effort into the shader core, shader engines, and memory, they have not ignored the rasterizers at the front-end or the ROPs at the back-end. In fact this could be one of the most important changes to the architecture from an efficiency standpoint.

Back in August, our pal David Kanter discovered one of the important ingredients of the secret sauce that is NVIDIA’s efficiency optimizations. As it turns out, NVIDIA has been doing tile based rasterization and binning since Maxwell, and that this was likely one of the big reasons Maxwell’s efficiency increased by so much. Though NVIDIA still refuses to comment on the matter, from what we can ascertain, breaking up a scene into tiles has allowed NVIDIA to keep a lot more traffic on-chip, which saves memory bandwidth, but also cuts down on very expensive accesses to VRAM.

For Vega, AMD will be doing something similar. The architecture will add support for what AMD calls the Draw Stream Binning Rasterizer, which true to its name, will give Vega the ability to bin polygons by tile. By doing so, AMD will cut down on the amount of memory accesses by working with smaller tiles that can stay-on chip. This will also allow AMD to do a better job of culling hidden pixels, keeping them from making it to the pixel shaders and consuming resources there.

As we have almost no detail on how AMD or NVIDIA are doing tiling and binning, it’s impossible to say with any degree of certainty just how close their implementations are, so I’ll refrain from any speculation on which might be better. But I’m not going to be too surprised if in the future we find out both implementations are quite similar. The important thing to take away from this right now is that AMD is following a very similar path to where we think NVIDIA captured some of their greatest efficiency gains on Maxwell, and that in turn bodes well for Vega.

Meanwhile, on the ROP side of matters, besides baking in the necessary support for the aforementioned binning technology, AMD is also making one other change to cut down on the amount of data that has to go off-chip to VRAM. AMD has significantly reworked how the ROPs (or as they like to call them, the Render Back-Ends) interact with their L2 cache. Starting with Vega, the ROPs are now clients of the L2 cache rather than the memory controller, allowing them to better and more directly use the relatively spacious L2 cache.
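
Neither vendor has published implementation details, but the basic binning idea the article describes looks roughly like this toy sketch (the tile size, data structures, and bounding-box test are all made up purely for illustration):

```python
# Toy sketch of binning triangles by screen tile, per the article's
# description of tile-based binning. All details here (32px tiles,
# bounding-box binning, dict-of-lists bins) are illustrative guesses,
# not AMD's or NVIDIA's actual hardware scheme.

TILE = 32  # hypothetical tile size in pixels

def bin_triangles(triangles, width, height):
    """Map each triangle (three (x, y) vertices) to every screen tile its
    bounding box touches. Shading tile by tile keeps each tile's working
    set on-chip instead of repeatedly spilling to VRAM."""
    bins = {}  # (tile_x, tile_y) -> list of triangle indices
    for i, verts in enumerate(triangles):
        xs = [x for x, _ in verts]
        ys = [y for _, y in verts]
        x0, x1 = max(min(xs), 0), min(max(xs), width - 1)
        y0, y1 = max(min(ys), 0), min(max(ys), height - 1)
        for ty in range(int(y0) // TILE, int(y1) // TILE + 1):
            for tx in range(int(x0) // TILE, int(x1) // TILE + 1):
                bins.setdefault((tx, ty), []).append(i)
    return bins

# Two small triangles on a 1920x1080 render target.
tris = [[(0, 0), (60, 10), (10, 60)], [(40, 40), (90, 50), (50, 90)]]
for tile, ids in sorted(bin_triangles(tris, 1920, 1080).items()):
    print(tile, ids)
```

A real binner would also do the hidden-pixel culling the article mentions before anything reaches the pixel shaders; this sketch only shows the bin-by-tile step.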

#22  Edited By Juub1990
Member since 2013 • 12620 Posts

@1080pOnly: Maybe 4K at max settings and pushing 30fps+. That's what I care about too. Not really concerned with mid-range either.

#23 1080pOnly
Member since 2009 • 2216 Posts

@Juub1990 said:

@1080pOnly: Maybe 4K at max settings and pushing 30fps+. That's what I care about too. Not really concerned with mid-range either.

Sure, but the 480 was priced and sold as a mid-range card and isn't the most powerful card AMD produces - maybe I just missed the point of that post?

#24  Edited By schu
Member since 2003 • 10191 Posts

@1080pOnly: I'm not trying to insult people who would buy the 480. It was a disappointment for me because if they competed at the top tier I believe it would drive prices down. This would also make the "elite" level accessible to more people.

#25 deactivated-59d151f079814
Member since 2003 • 47239 Posts
@schu said:

@1080pOnly: I'm not trying to insult people who would buy the 480. It was a disappointment for me because if they competed at the top tier I believe it would drive prices down. This would also make the "elite" level accessible to more people.

... Define "elite" level... Because the mid-range 480 and 1060 play games at 1080p ultra settings, pushing upwards of 100fps.

#26 deactivated-59d151f079814
Member since 2003 • 47239 Posts

@Juub1990 said:
@dynamitecop said:

Yeah, I thought it was pretty common knowledge from the start that it was a budget-centric mid-range GPU, I mean wasn't the $200 price a pretty big red flag...

It was. Didn't stop @tormentos from hyping it up as a killer card that would rival the top tiers at the time. We called him crazy and he kept rambling. I for one am very happy with the RX 480's performance. At that price point it's hard to beat. Especially considering it somehow caught up with the 1060.

AMD cards tend to have significant boosts in performance as time goes on in their lifespan. There is a great video about this here:

[Embedded video]

I hope AMD starts pressuring NVIDIA more... I do not like Nvidia's stance of forcing tech exclusivity (Nvidia 3D, G-Sync, refusal to have their cards use FreeSync even though it's open source, etc.).

#27  Edited By 04dcarraher
Member since 2004 • 23829 Posts

@sSubZerOo said:
@Juub1990 said:
@dynamitecop said:

Yeah, I thought it was pretty common knowledge from the start that it was a budget-centric mid-range GPU, I mean wasn't the $200 price a pretty big red flag...

It was. Didn't stop @tormentos from hyping it up as a killer card that would rival the top tiers at the time. We called him crazy and he kept rambling. I for one am very happy with the RX 480's performance. At that price point it's hard to beat. Especially considering it somehow caught up with the 1060.

AMD cards tend to have significant boosts in performance as time goes on in their lifespan.

I hope AMD starts pressuring NVIDIA more... I do not like Nvidia's stance of forcing tech exclusivity (Nvidia 3D, G-Sync, refusal to have their cards use FreeSync even though it's open source, etc.).

Only because they have been using the same base GPU architecture since late 2011, i.e. GCN. AMD was also building the Mantle API around their GCN architecture, which allowed AMD to get a head start on the standards that were to be used in future APIs. A downside is that you also inherit the issues of the architecture, which can't be addressed until a new base design is introduced. Nvidia's new GPU design base every other series is a double-edged sword: you see massive improvements, but the older architecture's drivers don't get the focus they should once a new architecture is introduced.

Proprietary tech for Nvidia exists because they can afford to support the software and hardware. AMD would do the same thing if they had the ability; their open source stance is out of necessity. AMD's last proprietary item was Mantle, where only GCN-based GPUs could use it.

#28  Edited By ronvalencia
Member since 2008 • 29612 Posts

@04dcarraher said:
@sSubZerOo said:
@Juub1990 said:
@dynamitecop said:

Yeah, I thought it was pretty common knowledge from the start that it was a budget-centric mid-range GPU, I mean wasn't the $200 price a pretty big red flag...

It was. Didn't stop @tormentos from hyping it up as a killer card that would rival the top tiers at the time. We called him crazy and he kept rambling. I for one am very happy with the RX 480's performance. At that price point it's hard to beat. Especially considering it somehow caught up with the 1060.

AMD cards tend to have significant boosts in performance as time goes on in their lifespan.

I hope AMD starts pressuring NVIDIA more... I do not like Nvidia's stance of forcing tech exclusivity (Nvidia 3D, G-Sync, refusal to have their cards use FreeSync even though it's open source, etc.).

Only because they have been using the same base GPU architecture since late 2011, i.e. GCN. AMD was also building the Mantle API around their GCN architecture, which allowed AMD to get a head start on the standards that were to be used in future APIs. A downside is that you also inherit the issues of the architecture, which can't be addressed until a new base design is introduced. Nvidia's new GPU design base every other series is a double-edged sword: you see massive improvements, but the older architecture's drivers don't get the focus they should once a new architecture is introduced.

Proprietary tech for Nvidia exists because they can afford to support the software and hardware. AMD would do the same thing if they had the ability; their open source stance is out of necessity. AMD's last proprietary item was Mantle, where only GCN-based GPUs could use it.

As joint X86 hardware platform holders, both Intel and AMD have major stakes in the PCI-E, USB and VESA standards.

Examples

http://arstechnica.com/gadgets/2010/12/intel-and-amd-sign-death-warrant-for-vga-port/

Controlling the VESA group, Intel and AMD signed the death warrant for the VGA port.

http://www.forbes.com/sites/jasonevangelho/2015/08/21/intel-throws-its-support-behind-amd-freesync-style-display-technology/#40b709aa66e6

Intel supported AMD's FreeSync for the DisplayPort 1.2a standard's creation in the VESA group.

Both Intel and AMD define what's inside the X86 PC box! It's in AMD's and Intel's interest to continue the X86 PC clone hardware industry.

NVIDIA doesn't control X86 PC hardware, hence they attempted to create their own standards.

Again, the Mantle API (running MS HLSL/Shader Model 5) had the end-game goal of becoming an open API standard, and the Vulkan API has reached that goal. Mantle's MS HLSL had to be replaced for Vulkan.

Quoting ex-Nvidia engineer from https://www.gamedev.net/topic/666419-what-are-your-opinions-on-dx12vulkanmantle/#entry5215019

The Mantle spec is effectively written by Johan Andersson at DICE, and the Khronos Vulkan spec basically pulls Aras P at Unity, Niklas S at Epic, and a couple guys at Valve into the fold.

Non-AMD competitors had access to the Mantle API, much like the DirectX 12 API road map.

Back in 2008, NVIDIA could be recalcitrant with its DX10.1 support, pushing vendor-specific extensions ahead of the DX10.1 standard.

The low porting effort between the DirectX 12 and Mantle APIs indicates similarities. My point: AMD has been sharing Mantle with MS.

From https://developer.nvidia.com/dx12-dos-and-donts

On DX11 the driver does farm off asynchronous tasks to driver worker threads where possible – this doesn’t happen anymore under DX12

Nvidia's DX11 driver is already using key DX12-style speed-up methods, i.e.

1. asynchronous tasks,

2. multiple threads, i.e. more than one thread.

AMD catches up on asynchronous tasks and multi-threading with DX12. Mantle's asynchronous compute wasn't even working until near DirectX 12's reveal.

Notice the preemptive context switching and page-level memory management during 2007's WDDM 2.x planning stage.

AMD implemented page-level memory management and preemptive context switching with the Radeon HD 7970!!!

Hardware VM support for GPUs has been implemented in HWS-equipped AMD GCN version 1.2 parts and the Xbox One.

NVIDIA has no excuses for being late!!!! It's f.ucking mentioned during WinHEC 2007.

WDDM 2.0 ships with Windows 10.

#29 MK-Professor
Member since 2009 • 4214 Posts

With Vega on the horizon, Nvidia's collapse should be just around the corner.

#30  Edited By 04dcarraher
Member since 2004 • 23829 Posts

@ronvalencia:

One tidbit that should be added: Mantle was originally all proprietary (AMD's secret weapon), aka not open source.

#31 clyde46
Member since 2005 • 49061 Posts

@MK-Professor said:

With Vega on the horizon, Nvidia's collapse should be just around the corner.

I really doubt it. Nvidia will see how Vega performs, then decide whether to push Volta out early. I never really cared for AMD video cards anyway: too hot, power-hungry and loud for my tastes. What I really wanted to see from CES was more information on Ryzen! The piss-poor Kaby Lake launch has shown just how much we need AMD to take it to Intel.

#32 tushar172787
Member since 2015 • 2561 Posts

@clyde46 said:
@MK-Professor said:

With Vega on the horizon, Nvidia's collapse should be just around the corner.

I really doubt it. Nvidia will see how Vega performs, then decide whether to push Volta out early. I never really cared for AMD video cards anyway: too hot, power-hungry and loud for my tastes. What I really wanted to see from CES was more information on Ryzen! The piss-poor Kaby Lake launch has shown just how much we need AMD to take it to Intel.

They could also just lower the price of Pascal and launch the 1080 Ti.

@m3dude1 said:

OP doesn't even know what they mean by 2x peak throughput per clock lmao

Yes, please enlighten us with your knowledge!

#33  Edited By ronvalencia
Member since 2008 • 29612 Posts

@04dcarraher said:

@ronvalencia:

One tidbit that should be added: Mantle was originally all proprietary (AMD's secret weapon), aka not open source.

The Windows Vulkan drivers for Intel IGPs, AMD GPUs and NVIDIA GPUs are not open source.

According to Oxide, the Mantle API didn't provide hit-the-metal AMD GPU intrinsics access.

The reason for Mantle's initial private access was to avoid OpenGL's politics, which slow down API development.

AMD internally developed FreeSync, which was later given to the VESA standards group.

AMD and EA DICE internally developed the Mantle API, which was later given to the Khronos standards group.

http://www.phoronix.com/scan.php?page=news_item&px=MTgyNTE

MIAOW: An Open-Source GPU Design Based On AMD's Southern Islands GCN

#34 clyde46
Member since 2005 • 49061 Posts

@tushar172787: If Vega fails to live up to the hype then yes, Nvidia could just lower the price of Pascal; however, if Vega dominates Pascal then Nvidia will move Volta forward. All eyes are on AMD now; they have to make Vega and Ryzen stick.

#36 telefanatic
Member since 2007 • 3008 Posts

I have been gaming on a laptop for quite some time now, and it's not that bad with a 1060m. But I will finally be able to build the system I always wanted, with a Vega GPU and a Ryzen CPU. Can't wait.

#37 clyde46
Member since 2005 • 49061 Posts

@xboxiphoneps3 said:

@clyde46: Too hot? That's why you buy a decent AIB card; same thing with Nvidia cards.

I like the reference design for the Nvidia cards. My two 980 Tis are Strixes from Asus, my Titan X Pascal is a reference card from Nvidia, my GTX 1080 is a blower style from EVGA, my 1070 and 1060 are the same blower style from Asus, and my 1050 Ti is an aftermarket style from Zotac. On all those Nvidia cards, the reference design is superior to what AMD offers.

#39  Edited By ronvalencia
Member since 2008 • 29612 Posts

@clyde46 said:

@tushar172787: If Vega fails to live up to the hype then yes, Nvidia could just lower the price of Pascal; however, if Vega dominates Pascal then Nvidia will move Volta forward. All eyes are on AMD now; they have to make Vega and Ryzen stick.

In terms of physical memory bandwidth, Vega 10's known 512 GB/s is 2X over RX-480's 256 GB/s. Any additional gains in this area beyond RX-480 2X would come from the tiling+binning+cache ROPS improvements. Effective memory bandwidth is very important for high-TFLOPS GPUs; my main concern is effective memory bandwidth, since it can bottleneck the higher TFLOPS.

In terms of TFLOPS, Vega 10's known 12.5 TFLOPS is 2.16X over RX-480's 5.8 TFLOPS. Any additional gains in this area beyond RX-480 2X would come from the double-rate FP16 feature. Perhaps the new dispatcher yields (minor) additional gains. In general, RX-480's 5.8 TFLOPS is memory bandwidth bound, e.g. R9-390X's 5.9 TFLOPS gets better results than RX-480 at 4K.

In terms of tessellation, Vega 10's known figure is "more than 2X" over RX-480's version. Perhaps the new dispatcher yields additional gains.

To reach 12.5 TFLOPS with 64 CUs, it will need a 1.528 GHz clock speed, which is 1.45X over Fury X.
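
A quick sketch of that arithmetic (GCN does 64 lanes x 2 FLOPs per CU per clock; Fury X's 1.05 GHz reference clock is the comparison point):

```python
# Checking the Vega 10 numbers above with the standard GCN throughput
# formula: TFLOPS = CUs x 64 lanes x 2 FLOPs (FMA) per clock x GHz / 1000.

def gcn_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000.0

# Clock needed for 12.5 TFLOPS on 64 CUs:
clock_ghz = 12.5 * 1000.0 / (64 * 64 * 2)
print(clock_ghz)                  # ~1.526 GHz
print(clock_ghz / 1.05)           # ~1.45X over Fury X's 1.05 GHz
print(gcn_tflops(64, clock_ghz))  # 12.5, sanity check

print(512 / 256)                  # 2X physical bandwidth over RX-480
print(12.5 / 5.8)                 # ~2.16X TFLOPS over RX-480
```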

Vega 10 sounds like RX-480 2X with tiling+binning+cache ROPS improvements.