How much performance would a GTX 670 lose in a PCI-E 2.0 x4 slot?

Mkilbride

New member
I know it will. But here's my issue: despite me having a huge case (Cooler Master Storm Trooper) and an XL-ATX motherboard, ASRock in their infinite wisdom decided to put the two PCI-E 2.0 x16 slots right next to each other. Now, I have custom coolers on each card that reduce temps quite a bit. Hell, my GTX 670, with only one card in the case, is at 1300 MHz core and around 7200 MHz memory, and it's scoring in line with a heavily OC'd 680 according to benchmarks. It sits at 99% usage in Unigine and hits 40C under load. Yeah, that's 99% for 5 minutes and it never touched 41C, not even once. That's damn sexy.

However, my second GPU blocks the fans of my first. Not entirely, there's about an inch of breathing room, but still not a lot. As evidence: for the last few months I ran my second GTX 670 next to it and both would hit around 65-69C under load, and that was during a cool winter in a room with good airflow. Those temps are below the throttle point, yes, but they won't be during summer, and the cards would crash on me at 71C, though I think that was because the heatsinks had fallen off (I checked). So my first GPU, right now, by itself, hits 40C at full load, compared to 71C before. That's a 31C difference; the lack of airflow makes that big of a change. Wow.

Now, I have two options that are viable. Sort of. I can put my second GTX 670 in my third PCI-E 2.0 x16 slot, but for some reason it only runs at x4. When I looked this up, it turns out it only runs at x8 when in Tri-SLI.

So in that case it's x16/x16/x8, but if I only populate the first PCI-E 2.0 x16 slot, for some reason my third one is reduced to x4. The card does run in that slot, I checked; I lost about 200 points or so in my Unigine benchmark test.

I want to see if there are any other benchmarks that have done this kind of testing. I know I'll lose some performance, but ugh, poor motherboard design.

Mobo: http://www.newegg.com/Product/Product.aspx?Item=N82E16813157240

Don't laugh. I did NOT get it because of the Fatal1ty branding. I got it because it was on sale for about $70 off, and at that price it was an insane deal when I bought it. I get a lot of people giving me shit about that.

3 x PCI Express 2.0 x16 slots (PCIE2/PCIE4: single at x16 (PCIE2) / x8 (PCIE4), or dual at x8 (PCIE2) / x8 (PCIE4); PCIE5: x4 mode)
So my last PCI-E 2.0 slot runs at x4, it says. I also have two plain PCI slots; you can tell what they are by the markings. Sadly, one of my PCI-E 2.0 x16 slots is blocked off by the cooler on my GPU. I have the Arctic Accelero Twin Turbo II coolers. As you can see, they work wonderfully, but they've left me wanting in this case.

My second option, one that seems simple but comes with problems of its own, is a PCI-E 2.0 x16 riser card. I used one for my sound card a bit back, which is how I found out about them.

This would give me ample space and airflow for my first card. HOWEVER, then the issue becomes: where do I put my second GPU? I can't just lay it over the PSU, as its own airflow would be blocked, and putting it on the floor would be sloppy, plus I might step on it by accident sometime. :P

http://www.amazon.com/HOTER-PCI-E-Express-Riser-Flexible/dp/B0057M16Q8/ref=pd_cp_pc_0

Something like that. I used one for my sound card and it was OK.

Where I could put it, I do not know. I'm left in somewhat of a bind.

The x4 slot I could try right away. However, I'd rather see some charts first and find out how much I'm really losing overall.

Thank you.
 
If these charts are still relevant, you apparently lose very little performance. However, things might have changed since the days of the 9800 GX2. I also think it really depends on the load: at a 25% load the x4 slot might be able to handle things, while at a higher load it might bottleneck. Lastly, your rig should not crash when the cards hit 71C; I'm pretty sure that's a decent enough temp. http://www.tomshardware.com/reviews/pci-express-2.0,1915-10.html
 
Short answer is none. As PCI-E 2.1 hasn't even been saturated yet, I doubt you'll see any noticeable difference. I'm running two 7970s on a Z68 board at PCI-E 2.1 x8 and they smash anything I throw at them.
 
Yes, that's x8.

I'm talking about x4. Things have changed since the 9800; three generations of GPUs have come since then. Back in the day, lane counts did not matter.

But now? If it's a 10 FPS difference, I'll go with the second option and find a way, even if it's only 5...
 
Apologies, I didn't read the entire lengthy OP. The best answer is to run them the way you normally do and bench them, then move the second card to the x4 slot and re-bench. That will give you the answer you're after. But IIRC, x8 on PCI-E 3.0 is roughly equal to x16 on PCI-E 2.0, so I would imagine x4 at 3.0 should equal x8 at 2.0 as well, but only benching will tell you for sure.
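
For reference, here's a rough back-of-the-envelope check of those equivalences. The per-lane rates and encoding overheads are the standard published PCI-E figures; the snippet itself is just a quick sketch, not anything vendor-specific:

```python
# Rough effective PCI-E bandwidth per link configuration.
# PCI-E 2.0: 5 GT/s per lane, 8b/10b encoding   -> ~500 MB/s usable per lane.
# PCI-E 3.0: 8 GT/s per lane, 128b/130b encoding -> ~985 MB/s usable per lane.
PER_LANE_MB_S = {
    "2.0": 5000 * (8 / 10) / 8,     # MT/s -> usable bits -> bytes
    "3.0": 8000 * (128 / 130) / 8,
}

for gen, per_lane in PER_LANE_MB_S.items():
    for lanes in (4, 8, 16):
        print(f"PCI-E {gen} x{lanes}: ~{per_lane * lanes / 1000:.1f} GB/s")

# Prints roughly: 2.0 -> x4 2.0, x8 4.0, x16 8.0 GB/s
#                 3.0 -> x4 3.9, x8 7.9, x16 15.8 GB/s
# i.e. x8 3.0 ~ x16 2.0 and x4 3.0 ~ x8 2.0, as mentioned above.
```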
 
The best option is to try it for yourself; it's the only real way to know for certain. You may be surprised by the results, and it will settle any nagging doubt you'd have from second-hand information.

I had a similar quandary a while back. I wondered about the benefits of running my SLI 4GB GTX 670s in either the x16/x16 or the x16/x8 slots of the motherboard. The manual for the Gigabyte P67-UD7-B3 even indicated that the x16/x16 slots should be used in that situation. The problem, as with yours OP, was that the cards sat right next to each other, which affected their cooling significantly.

So I set about finding out for myself. I used the Heaven benchmark, as it's repeatable and I can exactly match the benchmark durations and plot the runs against each other using the MSI Afterburner logs and Excel. Both 670s were at stock, and back then I was using the latest beta driver, 304.48.
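
(If you'd rather not do the overlay in Excel, something like the quick Python sketch below does the same job. The semicolon delimiter, the "Framerate" column name and the file names are assumptions about how the Afterburner log was exported, so adjust them to match your own logs.)

```python
# Overlay the framerate traces of two MSI Afterburner benchmark logs.
# Assumes each log has been trimmed to a single Heaven run and exported as
# semicolon-separated text with a "Framerate" column; adjust names as needed.
import pandas as pd
import matplotlib.pyplot as plt

def load_fps(path, fps_column="Framerate"):
    df = pd.read_csv(path, sep=";", skipinitialspace=True)
    df.columns = [c.strip() for c in df.columns]
    return pd.to_numeric(df[fps_column], errors="coerce").dropna().reset_index(drop=True)

run_x16 = load_fps("heaven_x16_x16.csv")  # hypothetical file names
run_x8 = load_fps("heaven_x16_x8.csv")

print(f"mean FPS x16/x16: {run_x16.mean():.1f}")
print(f"mean FPS x16/x8:  {run_x8.mean():.1f}")

plt.plot(run_x16.index, run_x16, label="x16/x16")
plt.plot(run_x8.index, run_x8, label="x16/x8")
plt.xlabel("log sample")
plt.ylabel("FPS")
plt.legend()
plt.show()
```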

I logged four benchmark runs of Heaven at the following settings and got these scores:

Position 1 - x16/x16, Heaven Standard @ 1080p - 3299
Position 2 - x16/x8, Heaven Standard @ 1080p - 3272

Position 1 - x16/x16, Heaven Maxed @ 1600p - 1341
Position 2 - x16/x8, Heaven Maxed @ 1600p - 1331

Not much of a difference in the scores, and a minimal difference in FPS between the slot configurations.
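
(For what it's worth, those deltas work out to less than 1%; a trivial check:)

```python
# Percentage score drop going from x16/x16 to x16/x8 for the two runs above.
for label, x16_score, x8_score in [("Standard @ 1080p", 3299, 3272),
                                   ("Maxed @ 1600p", 1341, 1331)]:
    drop = (x16_score - x8_score) / x16_score * 100
    print(f"{label}: {drop:.2f}% lower at x16/x8")
# Standard @ 1080p: 0.82% lower, Maxed @ 1600p: 0.75% lower
```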

Then, looking at the synced graphs, I noticed a "shift" of sorts between the traces for the two slot configurations. The x16 trace occurs slightly earlier in time than the x8 trace, and whenever a new scene was loaded they would fall back into sync:

[Attached image: Chart_02.jpg]


Quite weird. I think this is evidence of some kind of lag, perhaps texture loading lag, which makes sense given that the FPS hardly changed; maybe the wider link means more data gets to the cards that little bit sooner. I never got a satisfactory explanation of it on the forum where I first posted this.

Anyway, I was happy that I wasn't losing a large amount of performance running at x16/x8, and I've been running x16/x8 ever since.

However, when I eventually watercool the GPUs I'll run them in the x16/x16 slots, because the temperatures won't be as much of an issue.

My advice to you, Mkilbride, is to suck it and see. Find out how your system behaves; it's the only way to know for certain.
 
Another insane thread;

No, there is no difference at all between PCI-E 2.0 x16 and x4.

The amount of bandwidth available at x16 and x4 is different, but even an x2 link could still handle the bandwidth of a modern GPU.

There are so many people who think x16 is twice as good as x8. There is almost no real-life performance difference.

Unless you are running something like a Quadro 6000 at 100% load, you will not see even a 1 FPS difference in performance, because there is still more than enough bandwidth for the card to run at 100% without hitting the limits.
 
Another insane thread;

No, there is no difference at all between PCI-E 2.0 x16 and x4.

The amount of bandwidth available at x16 and x4 is different, but even an x2 link could still handle the bandwidth of a modern GPU.

There are so many people who think x16 is twice as good as x8. There is almost no real-life performance difference.

Unless you are running something like a Quadro 6000 at 100% load, you will not see even a 1 FPS difference in performance, because there is still more than enough bandwidth for the card to run at 100% without hitting the limits.

Make up your mind. No difference, almost no difference...

There is a difference, however small; whether it's tangible or perceptible is open to debate, though.

Would you suggest someone stick a Titan into an x4 slot? Of course not, because you'd be starving the GPU of data. Once the GPU has that data it'll fly with it; if it doesn't have the data, that can introduce texture pop and perhaps frame stutter (I get a bit of texture pop in Far Cry 3).

There's more to performance than just frames per second, I hope you'll agree.
 
Put the cheapest Xonar sound card you can buy in your second PCI-E 2.0 x16 slot?

Edit: unless you want a nicer one.
 
Make up your mind. No difference, almost no difference...

There is a difference, however small; whether it's tangible or perceptible is open to debate, though.

Would you suggest someone stick a Titan into an x4 slot? Of course not, because you'd be starving the GPU of data. Once the GPU has that data it'll fly with it; if it doesn't have the data, that can introduce texture pop and perhaps frame stutter (I get a bit of texture pop in Far Cry 3).

There's more to performance than just frames per second, I hope you'll agree.

I say almost no difference because he only ran benches to compare x8 to x16 performance, which in itself isn't very reliable.

Of course I wouldn't recommend a Titan on x4, but you said it would bottleneck, and first of all, a Titan will bottleneck any CPU for now. And yes, as I said, unless you are running something extreme like a Titan or a Quadro 6000, there is still plenty of bandwidth left for you, and with good caching there also won't be any issues with texture pop.

And of course I agree that FPS isn't a great measure of performance, since a lot of factors determine it: things like cache speeds, calculations per second of the chip, frequency versus process node, etc.

The point I'm trying to make is that there is no real-life influence on the performance of a card whether it is running at x4 or x16. I'd have to look up some numbers if you want more evidence, but you can just do that yourselves.

Oh, by the way, x4, x8, x16 etc. is just the number of lanes the slot provides. The lane count sets the peak bandwidth of the link rather than changing how the GPU reads and writes; more lanes just give the GPU more room to move data at once, and it won't necessarily use all of it.

I did a quick Google search: a PCI-E 2.0 x16 link tops out at around 8 GB/s and x4 at around 2 GB/s. I assure you that your GPU uses far less than that.
 
Well put, Pr3d4t0r.

I did my own testing because I'd read many conflicting reports on PCI-E bus speeds and their alleged benefits. Surely a higher number means it's better :D.

My graph showed there was hardly any difference in terms of FPS and bench scores, but the shifting graph lines had me puzzled. I ran the tests a few times and it stubbornly persisted. Perhaps if I ran the test again with one of the GPUs in an x4 slot, its graph line would be shifted to the right of the others. That's what I thought the OP might experience when moving his other card to an x4 slot.
 
Short answer is none. As PCI-E 2.1 hasn't even been saturated yet, I doubt you'll see any noticeable difference. I'm running two 7970s on a Z68 board at PCI-E 2.1 x8 and they smash anything I throw at them.

Hmm. I think at x4 it begins to matter though, tbh.

IIRC, running CrossFireX at x8/x8 loses about 5% performance. I'm pretty sure I read that dropping to x4 does make quite a difference?

I run SLI at x16/x8 and I get right about the same scores as a full x16/x16 system.
 
The argument is pointless.

I just found out NVIDIA does not support SLI at x4, though AMD allows CrossFire at x4.

I put the card in the x4 slot and SLI didn't work. I looked it up and found that out. =/
 
The argument is pointless.

I just found out NVIDIA does not support SLI at x4, though AMD allows CrossFire at x4.

I put the card in the x4 slot and SLI didn't work. I looked it up and found that out. =/

That is kind of ironic.

Shame, I was going to suggest a test by OC3D plotting the differences on PCI-E 2.0 across the x4, x8 and x16 permutations. Hey ho!
 
I know it's not 2.0 x4, but I did test two 7950s, comparing 3.0 x16 (single) vs 3.0 x8/x8 (CF) vs 2.0 x8/x8 (CF) at 1920x1080 and 5760x1080. Link.

I unfortunately don't have any x16/x16 results to compare to, but some games do show less scaling at 1080p than at Eyefinity resolutions, while at the same time they don't seem to care as much about 2.0 x8/x8 at 5760x1080.
 