GTX 750 vs GTX 580

Fermi runs its CUDA cores at double the clock speed. So on Fermi, "core clock" means the clock for the ROPs and TMUs (and a few other things), while the CUDA cores run at 2x that (i.e. the shader clock).

Now :
With Kepler, Nvidia decided to go wider: more CUDA cores, but at a "normal" clock speed. Think Pentium 4(D) vs. Conroe philosophy.
That's why you can't just compare the value of a "CUDA cores x frequency" equation between two different architectures.
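A rough numbers sketch of the point above (the clock and core counts below are the approximate reference specs for each card, used only for illustration):

```python
# Theoretical single-precision throughput: cores * shader clock * 2 ops (FMA).
# Specs below are approximate reference values, for illustration only.

def gflops(cores, shader_clock_mhz):
    """Peak FP32 GFLOPS: each CUDA core does one FMA (2 ops) per shader clock."""
    return cores * shader_clock_mhz * 2 / 1000

# GTX 580 (Fermi): 512 cores, core clock 772 MHz -> shader clock 1544 MHz (2x)
print(gflops(512, 1544))   # ~1581 GFLOPS

# GTX 750 Ti (Maxwell; like Kepler, no doubled shader clock): 640 cores at ~1020 MHz
print(gflops(640, 1020))   # ~1306 GFLOPS

# A naive "cores x core clock" comparison (512*772 vs. 640*1020) would
# understate Fermi by half, because Fermi's shaders run at 2x the core clock.
```

So on paper the old Fermi flagship still edges out the new low-end Maxwell chip, once you use the right clock for each architecture.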

On top of that, you are trying to compare Fermi to Maxwell:
Maxwell is Kepler, just more energy efficient (at least for the time being).
You want a performance (i.e. FPS) improvement from Maxwell?
Wait for the 20nm chips.

As for performance :
Memory bandwidth (that's frequency x bus width), TMUs, ROPs and CUDA cores all come together to deliver performance.
If a GPU is lacking in one department, another GPU can (and will) be faster.
Newer doesn't always mean better.
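To make the bandwidth part concrete, a quick sketch (the data rates below are the approximate effective GDDR5 rates for each card, used only for illustration):

```python
# Memory bandwidth = effective memory data rate * bus width (in bytes).
# Data rates below are approximate effective GDDR5 rates, for illustration.

def bandwidth_gbs(effective_rate_mtps, bus_width_bits):
    """Bandwidth in GB/s: transfers/s * bytes moved per transfer."""
    return effective_rate_mtps * 1e6 * (bus_width_bits / 8) / 1e9

# GTX 580: 384-bit bus, ~4008 MT/s effective -> ~192 GB/s
print(bandwidth_gbs(4008, 384))   # ~192.4

# GTX 750 Ti: 128-bit bus, ~5400 MT/s effective -> ~86 GB/s
print(bandwidth_gbs(5400, 128))   # ~86.4

# The newer card has faster memory chips but a much narrower bus,
# so the older GTX 580 still wins on raw bandwidth.
```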

The GTX 580 was the top-of-the-line single-GPU card of its generation.

The GTX 750 Ti is (for now) the lowest Maxwell chip available, and it sits between the GTX 650 Ti and GTX 660 (performance-wise).

Last thing : Marketing is NOT logical.
So expecting a new GTX x60 to be as fast as a GTX x80 from one/two generations ago is... pointless.
That's why casual people need reviewers like Tom, to tell them why/where different products stand when put against each other.

Hope I cleared things up a bit :)

+1 :D Thank you! ;)
 