Is a 2500/2600K *really* enough for a 1070/1080...probably?

Scoob

New member
Hey all,

Yes, another one of those threads...sorta!

It's late and my brain isn't working at 100% (a fairly strong 15% at a guess). I've just got in (late) after a night out, having left my new GTX 1070 - delivered today - all boxed up, sat on my desk.

Anyway, I have two PCs: the one in my sig, and a second one consisting of a 2600K @ 4.4GHz, a GTX 680 (now the 1070), 8GB RAM and a pair of fast SSDs in RAID 0, all on W10.

So, as my second PC is a simple air-cooled build, I'd planned to pop the 1070 in there and make that my new main gaming PC - until my BIG upgrade in the near-ish future (full 6700K build).

As a quick test, I put my main gamer (2500k @ 4.6ghz, 2x GTX 680 @ 1.2ghz, 32gb Ram....basically my sig PC) up against the 1070 in the 2600k PC. Using Valley at 1080p "Ultra" preset (no AA for max GPU load) here are my numbers:

PC 1 is the main gamer, PC 2 is the 2600K + 1070.

FPS 1: 125.7
FPS 2: 135.7
Score 1: 5261
Score 2: 5678
Min FPS 1: 39.2
Min FPS 2: 39.2
Max FPS 1: 179.6
Max FPS 2: 208.1
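For anyone curious how those runs compare in relative terms, here's a throwaway Python snippet (numbers taken straight from the results above) that works the deltas out - roughly an 8% average-FPS uplift for the 1070, with the big gap showing up in max FPS:

```python
# Valley 1080p "Ultra" results: (SLI 680s, single 1070), from the post above.
results = {
    "Avg FPS": (125.7, 135.7),
    "Score":   (5261, 5678),
    "Min FPS": (39.2, 39.2),
    "Max FPS": (179.6, 208.1),
}

for name, (sli_680, gtx_1070) in results.items():
    uplift = (gtx_1070 - sli_680) / sli_680 * 100
    print(f"{name}: {uplift:+.1f}% for the 1070")
```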

Quite remarkable how close a pair of overclocked GTX 680s gets to a single GTX 1070, eh?

Now "bottleneck" isn't a word I like to use, however I've read lots and lots (and lots) of posts from people saying how Sandy Bridge, when overclocked, is still entitled to be called a "current" CPU, and how newer offerings such as Skylake only offer a little more gaming performance vs an overclocked Sandy Bridge. The accepted wisdom seemed to be to just get a better GPU.

As I'm doing my upgrade in parts, I thought I'd test this theory. Ok, yeah, I've only run ONE test so far, and we know that such benchmarks can really leverage an SLI setup. Plus I'm only at 1080p, and the Valley benchmark doesn't use much vRam at all.

Just found my results interesting as an initial test; I expected the 1070 to destroy a pair of 680s in this particular test...unless my older CPU was holding things back.

Checking some stats (GPU-Z logs), I can see that my 680s were pushed into the high 90s load-wise during the Valley run, as you'd expect. Checking the same stats for the 1070's run, I see a peak GPU load of 98% and the card reaching a healthy stock boost of 1,900MHz - doesn't look like a GPU being starved of CPU horsepower, does it?

Anyway, just thought I'd post these interesting first results...I'll be doing more play...erm, testing of course...not tonight, I really should be off to bed!

Cheers,

Scoob.
 
You might want to turn AA on; I just benched with the same settings and the GPU usage was anywhere between 40-80%.
[Attached: two Valley benchmark screenshots]

The second one is with 8x AA; GPU load was at 99% most of the time there. Still below what Tom got in his benchmarks: he got ~100fps at 1080p with AA and 156fps without AA.
[Attached: Afterburner readings screenshot]

Afterburner readings from a run without AA and a run with AA. Even in the run with AA there are sections where the GPU is at ~70% usage but the CPU usage doesn't spike to bottleneck levels.
I actually had a talk with Wraith and K1lbane about this on Discord this evening, and we came to the conclusion that a CPU bottleneck seems unlikely, but since you get slightly better scores than me with a slightly better CPU, I suppose it must be that.
 
Hey SeekaX,

Thanks for the reply.

You're right, popping some heavy AA into the mix would make the card work a little harder in some ways. However, personally speaking, I've always found that GPU temps and peak load are generally slightly reduced when using AA in this particular test. As such, when stressing / testing an OC, I always elect to disable AA.

All that being said, a pair of overclocked GTX 680s still has quite a lot of raw power IF they scale well (as many synthetics do) and that all-important 2gb vRam isn't exceeded.

Interestingly, just before I finished for the night, I fired up Fallout 4. Now, with the 680's SLI works pretty well, but I have to run at mostly medium settings to keep my vRam usage low - or so I thought.

I fired up FO4 on the 1070, instantly maxing out every setting (as you do) but with motion blur off. Gameplay felt just as silky smooth as the pair of GTX 680s at their best, yet at much higher settings. I did get the odd slight frame hitch, but I was sprinting along using god mode's infinite AP at the time, so it might have been some pure cell-loading hitches.

After running around for a good 10 minutes or so, I checked GPU-Z. The card had hardly broken a sweat; it had rarely gone up to its base clock, remaining in a power-saving underclocked state, no doubt due to the 60fps cap. Nice. More surprisingly, peak vRam use for that 10-minute session was...2gb on the nose lol. So, the extra vRam was not being leveraged in this particular run.

I'll do some more testing tomorrow if I get the time.

Scoob.
 
Hey Scoob,

Interesting results there!

Right now it seems that Skylake is generally better when run with a high end GPU. You may be CPU bound but probably not in your tests, as that is a GPU benchmark. Although, the i7 2600K does seem to do a lot better with the extra threads compared to the 2500K.

For now, Sandy Bridge should do just fine in modern titles if overclocked on memory and core.

Check out this video called "Is it Time To Upgrade Your Core i5 2500K" by Digital Foundry for more info!

TL;DR: Sandy Bridge is fast enough, but Skylake improvements carry the platform further, especially in CPU-bound titles, rendering, etc.
 
I'm running an i7 5930K @ 4.0 GHz with a GTX 1080 FE @ 1835 MHz boost, and I'm getting a lower score with ExtremeHD (3920) than you, but better max fps (200.2). I guess it's your overclock on the CPU that gets you the high score.
 
Morning all,

I've got a few more upgrades for my old 2600k System to go in today, then it'll replace my current main Gamer in the short-term. Going to give it the 32gb from the Gamer, and I've two new 500gb 850 EVO's to go in, replacing the pair of 120gb Kingston SSD Now V300's. I'll re-install W10, then get things up and running.

Seems silly having my fully water-cooled gaming machine replaced by one with a much more basic set-up. The 1070 is air-cooled - though it is a Palit GameRock with a very good aftermarket cooler on it. I just have an old Antec AIO for the 2600k, but it works well with just 4.4ghz to cool :)

Once I've transitioned to the temporary system properly, I'll be able to get some more testing done. I think the 2600k will do the business for now.

As an aside, my eventual upgrade to a 6700K isn't so much about pure CPU grunt - good job really - but rather the other generational benefits I'll get. I.e. I'll be on PCIe 3.0 rather than 2.0, I'll gain some additional SATA Rev 3 ports over the Z68's two, the new DDR4 RAM means my bandwidth will double (going from DDR3 1600 to DDR4 3200), I can use a very fast M.2 NVMe drive on the motherboard I've picked (ASUS Hero), and I gain the benefit of an additional 20 PCIe 3.0 lanes on the chipset, as well as more bandwidth to the CPU. So, a decent package of improvements, which should lead to more fluidity in gameplay and a healthy uplift in minimum frame rates, from what I've been reading.
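On the "bandwidth will double" point, the back-of-envelope theoretical peak works out as transfer rate times bus width times channel count. A quick Python sketch (assuming dual-channel operation on both platforms, which is typical for Z68 and Z170 boards):

```python
# Theoretical peak memory bandwidth: MT/s x 8 bytes per transfer x channels.
# Assumes dual-channel; real-world throughput will be lower.
def peak_bandwidth_gbps(mt_per_s, channels=2, bus_bytes=8):
    return mt_per_s * bus_bytes * channels / 1000  # GB/s

print(peak_bandwidth_gbps(1600))  # DDR3-1600: 25.6 GB/s
print(peak_bandwidth_gbps(3200))  # DDR4-3200: 51.2 GB/s
```

Exactly double, as you'd expect, since the only variable changing is the transfer rate.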

If the 6700K can maintain a decent OC of at least 4.6ghz - it should - then it'll make for a decent upgrade at some point...and I get to play around building a machine all over again of course :)

@ Haxorinator: Yep, I've watched that, and many other videos over the past...well, the past period of time since Sandy B. was replaced lol. Each generation I see what people are saying, but only with Skylake has the jump started to seem more worth it - but more with the other benefits rather than just pure CPU IPC as I previously mentioned.

@ tolagarf: Not too shabby for my "ancient" 2600K then! I know this CPU has more OC headroom in it - it was at 5.1ghz for a short time while cooled with a dedicated loop in a friend's machine. As my cooling solution was more modest, and the machine was going to be moved around a lot, I stuck with a basic old AIO I had laying around, hence just 4.4ghz.

Right, off to tinker some more.

Scoob.
 
Hi again,

Did my rebuild today, so the 2600k now has a pair of Samsung 850 EVO SSDs in RAID 0, 32gb of DDR3 1600 vs. the old 8gb, as well as the 1070 and a fresh install of W10, all updated.

I ran the Valley test "Extreme HD", so with 8x AA applied, at 1920x1080 rather than my usual 1200p for comparison.

FPS: 90.2
Score: 3774
Min FPS: 22.7
Max FPS: 185.4

So, not a bad showing.

GPU usage was often around 98-99% with the odd dip below this during scene transitions as you'd expect.

CPU usage remained very low, usually under 20%. However, I noticed one core was being hammered compared to the others...I double-checked Threaded Optimisation in the NV profile, and this was on by default, so I assume it's just the nature of the benchmark in this case.
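Worth noting that "one core hammered but under 20% overall" isn't contradictory: the aggregate figure is an average across all hardware threads, and a 2600K with Hyper-Threading exposes eight of them. A quick illustrative Python sketch (the per-thread loads here are made-up numbers, just to show the arithmetic):

```python
# Aggregate CPU usage is the mean across all hardware threads.
def aggregate_usage(per_thread_loads):
    return sum(per_thread_loads) / len(per_thread_loads)

# Hypothetical: one thread pinned at 100% (say, the benchmark's render
# thread) while the other seven tick over at ~5% each.
loads = [100] + [5] * 7
print(aggregate_usage(loads))  # well under 20% overall, despite one maxed core
```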

Oh, I tried ARK Survival Evolved, what a transformation! Sure, ARK is still woefully optimised (IMO) but I've gone from all low settings - only a single 680 being utilised in this game of course - to everything as high as it will go. Sure, I don't see the magic 60fps all the time, but 50fps or so is good enough for me! Hadn't quite realised how pretty this game could look either!

A few words on my GPU...

I got the Palit GameRock GTX 1070 because it was the cheapest 1070 I could find (£395) that was actually in stock. For this I got quite an impressive card. Firstly, it's pretty huge - there's a lot of metal there for getting rid of heat. It has two near-silent (so far) fans that spin at a fairly low RPM (all default settings thus far), yet, even stressing it, 69c is the highest I've seen. Note it's very hot and humid today; this room is in the 30s.

Card is happily boosting to 1.9ghz near as damnit (1885-1895mhz) and seems to hold that level - I've not even played around with fan profiles yet.

This card has the, now obligatory it seems, RGB LEDs so I can make it glow pretty colours if I desire - a valuable feature for my non-windowed case... ;)

So, all in all quite pleased on several fronts: my 2600k @ 4.4ghz being able to feed this GPU quite nicely, the build and feature quality of the GPU for the (relative) cheapness of it, and of course the performance it offers.

So, to answer my thread's original question: YES, the 2600K (and doubtless the 2500K), with an easily achievable overclock, can happily keep a GTX 1070 fed.

Point of note: while my system, sans the GPU, is certainly an old system now, it was never a budget rig. The CPU was of course as good as it got back in its day for quad cores, the motherboard is a Gigabyte Z68X-UD7-B3 and set my friend back close to £400 when it was new. The old SSDs (120gb Kingston SSD Now V300's) were a budget buy, but vs. any spinner they were pretty epic - the new 850 EVO's just build on that and give me 4x the capacity of course.

Anyway, that's my ramble done for now, I'm off to play some games!

Scoob.
 
Played a bit of FO4 this eve with nice, smooth gameplay for the most part. A couple of known areas saw a slight fps drop, but generally very playable at max settings - just 1920x1200 though.

I also tinkered with a bit of overclocking. The GPU will sit around 1900mhz after raising the power limit to 110%. I did push to just over 2000mhz, but it wasn't always stable - though temps never went over 69c in this very, very warm room.

I did notice that despite the tool allowing me to increase the vCore (Palit's "Thundermaster" tool lol), this actually had zero effect on the reported VDDC in GPU-Z - I had something similar with my old EVGA GTX 680s, which needed a BIOS tweak before the voltages actually unlocked. Not an issue for me, I was just tinkering :)

Scoob.
 