PCI-Express Performance Scaling



#1  Edited By Coseniath
Member since 2004 • 3183 Posts

I know a lot of people ask about this: whether their PCIe 3.0 x8 or PCIe 2.0 x16 slot would be able to run the "X" card.

This is an awesome article from TechPowerUp, done with the fastest single GPU at the moment.

GeForce GTX 980 PCI-Express Scaling

These are the results of the article:

Although some games will benefit more from a faster PCIe link, the general takeaway is that even an old PCIe 1.1 x16 slot can feed such a powerful GPU with only a 6% loss at 1080p.

:)
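For reference, here's a quick back-of-the-envelope sketch of the theoretical per-direction bandwidth for these configurations (spec figures; the script is illustrative and not from the article):

```python
# Theoretical per-direction PCIe bandwidth, by generation and lane count.
# Gen 1.x/2.0 use 8b/10b encoding (80% efficient); Gen 3.0 uses 128b/130b.
GENS = {
    "1.1": (2.5, 8 / 10),     # (GT/s per lane, encoding efficiency)
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
}

def bandwidth_gbs(gen: str, lanes: int) -> float:
    """Usable bandwidth in GB/s for a given generation and lane count."""
    rate, eff = GENS[gen]
    return rate * eff * lanes / 8  # GT/s -> GB/s (8 bits per byte)

for gen, lanes in [("1.1", 16), ("2.0", 8), ("2.0", 16), ("3.0", 8), ("3.0", 16)]:
    print(f"PCIe {gen} x{lanes}: {bandwidth_gbs(gen, lanes):5.2f} GB/s")
```

Note how PCIe 1.1 x16 and PCIe 2.0 x8 land at the same 4 GB/s, which is why they behave alike in the benchmarks.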


#2 GeryGo  Moderator
Member since 2006 • 12803 Posts

It's not much of a difference, but yes, if you go with a top-end GPU, losing that 5% of performance matters;

Although I'm not sure who would spend $500 on a GPU but still game on a 5-year-old mobo.


#3 JigglyWiggly_
Member since 2009 • 24625 Posts

You could just raise the BCLK a bit to recover the remaining speed if you are on PCI Express 2.0 x8.
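A rough sketch of the idea, assuming a platform like Sandy Bridge where the PCIe reference clock is derived from BCLK (the numbers are illustrative):

```python
# Illustrative only: where the PCIe clock is tied to BCLK (e.g. Sandy
# Bridge), a BCLK bump scales PCIe link bandwidth proportionally.
STOCK_BCLK = 100.0        # MHz
PCIE2_X8_GBS = 4.0        # theoretical PCIe 2.0 x8 bandwidth at stock

for bclk in (100.0, 103.0, 105.0):
    scaled = PCIE2_X8_GBS * bclk / STOCK_BCLK
    print(f"BCLK {bclk:.0f} MHz -> ~{scaled:.2f} GB/s on PCIe 2.0 x8")
```

Keep in mind that SATA and other devices share that clock, so the stable headroom is usually only a few percent.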


#4  Edited By Coseniath
Member since 2004 • 3183 Posts
@PredatorRules said:

It's not much of a difference, but yes, if you go with a top-end GPU, losing that 5% of performance matters;

Although I'm not sure who would spend $500 on a GPU but still game on a 5-year-old mobo.

First of all, all AM3 mobos have PCIe 2.0.

Some of them might want to SLI, let's say, two GTX 760s (or CrossFire two R9 280s). This article clearly shows that there will be no problem even at PCIe 2.0 x8.

Other people with older machines are thinking of buying a GTX 750 Ti or an R9 270 in order to delay their major upgrade a little longer...

The GTX 980 was overkill, but it was necessary in order to show that these performance changes stay minor all the way up to the highest single-GPU performer.

I even remember that the first Sandy Bridge mobos had only PCIe 2.0.

I don't see what's wrong with people going with, let's say, SLI GTX 970s when they have an i7 2600K.

Now they can see that PCIe 2.0 x8 will not limit the performance that much.


#5 GTR12
Member since 2006 • 13490 Posts

Does someone want to explain why the difference between 3.0 and 1.1 x4 is only 12% at 4K, and why the difference is larger at 1080p?

Wouldn't a person just use DSR to render at 4K and display at 1080p? The performance gap would be smaller.


#6  Edited By JigglyWiggly_
Member since 2009 • 24625 Posts

@GTR12 said:

Does someone want to explain why the difference between 3.0 and 1.1 x4 is only 12% at 4K, and why the difference is larger at 1080p?

Wouldn't a person just use DSR to render at 4K and display at 1080p? The performance gap would be smaller.

At 1080p the frame rate is higher, so the CPU sends more data to the GPU every second, which requires more bus bandwidth.
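A toy model of that intuition; the per-frame transfer size here is a made-up assumption and varies heavily by game:

```python
# Toy model: PCIe traffic scales with frame rate, since the CPU uploads
# draw calls and dynamic buffers to the GPU every frame.
MB_PER_FRAME = 30.0  # hypothetical CPU->GPU transfer per frame

for res, fps in [("1080p", 120), ("4K", 40)]:
    traffic = MB_PER_FRAME * fps / 1000  # GB/s over the bus
    print(f"{res} at {fps} fps: ~{traffic:.1f} GB/s of bus traffic")
```

Higher frame rates at 1080p mean more transfers per second, so a slow slot shows up there first, while GPU-bound 4K leaves the bus comparatively idle.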


#7 RyviusARC
Member since 2011 • 5708 Posts

I think a 1% difference can be written off.

There is no difference between x16 2.0 and x16 3.0, or even x8 3.0.

This is because current video cards do not need that much bandwidth.

It's like moving from a three-lane highway to a four-lane highway when there are only three cars using it.

Give it a few years and there will be some difference.

The last time I noticed a jump was going from AGP 8x to PCI-E x16.
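For comparison, the spec figures for that jump (a quick sketch; note that AGP's bandwidth is shared between directions while PCIe is full duplex):

```python
# Spec figures for the AGP 8x -> PCIe x16 transition.
AGP_8X_GBS = 2.133      # 66 MHz x 32-bit x 8, shared both directions
PCIE1_X16_GBS = 4.0     # PCIe 1.x x16, per direction

print(f"AGP 8x:       {AGP_8X_GBS:.2f} GB/s total")
print(f"PCIe 1.x x16: {PCIE1_X16_GBS:.2f} GB/s each way "
      f"(~{PCIE1_X16_GBS / AGP_8X_GBS:.1f}x per direction)")
```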