You're comparing two different architectures - there have been two new architectures since the days of the 580 so you can't compare specs like clock speed like that any more than you can compare the clock speed of a Haswell CPU vs a Westmere CPU. Doesn't work like that.
You are totally right!! There are different architectures... and yes, I can compare them! Because the new architecture has worse performance compared with the old architecture... isn't that strange to you!?
It's like a Socket 775 quad core being better than a second-generation i5 with the same clock frequencies, cache, etc., etc...!
I'm talking about performance, frames per second, not power consumption... and I'm comparing with the 580, not the 480...!!!
lolwut
The 750, when pushed, is equal to a GTX 480, BUT it only draws 60 W, unlike the 230+ W of the 480. How is that NOT an improvement??
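To put some rough numbers on that claim, here is a quick sketch using only the figures quoted in this thread (480 drawing 230+ W, 750 around 60 W, and roughly equal frame rates per the post above); these are the thread's own ballpark numbers, not measured values:

```python
# Rough FPS-per-watt comparison, using the thread's own rough numbers:
# GTX 480 at ~230 W, GTX 750 at ~60 W, assumed equal frame rates.
def fps_per_watt(fps, watts):
    """Frames per second delivered per watt of board power."""
    return fps / watts

fps = 60.0  # hypothetical frame rate, assumed the same for both cards
gtx_480 = fps_per_watt(fps, 230)
gtx_750 = fps_per_watt(fps, 60)
print(f"GTX 480: {gtx_480:.2f} FPS/W, GTX 750: {gtx_750:.2f} FPS/W")
print(f"The 750 delivers about {gtx_750 / gtx_480:.1f}x the FPS per watt")
```

By this arithmetic the 750 gets nearly four times the frames out of every watt, which is the whole point of the efficiency argument.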
And how much was the 580 brand new?
Compared to how much a 750 is brand new today?
When an 880 comes out, then have a go at the architecture. The fact that a card has the same CUDA core count as a high-end card from two generations ago proves a point. Don't compare the 750 to a 580: one is a gaming card, the other is more a media card, entry-level gaming at best, and it draws only a quarter of the power.
First of all, CUDA cores aren't for gaming. CUDA is used for GPU Compute tasks and whatnot. Comparing the gaming performance of cards by how many CUDA cores they have isn't the way to go about it.
Secondly, no, you can't directly compare different architectures clock for clock. Maxwell runs at a significantly higher clockspeed all the time by default, but that's normal for it. Downclocking it to the baseline clocks of the older cards of course makes it seem worse.
I think you're getting confused by the graphic that nVidia have given. Ignore the Processor clock speed, it's the Graphics clock speed that you're comparing with the 750. Ergo, Maxwell runs faster with no qualms.
Edit: Lol, if every 580 could run at 1.5GHz stock... we'd still be using them today!
The 580 was an improved version of the 480 (a refresh, basically); they were still Fermi. That is the point I was making.
The 750 is an entry-level card.
The 580 was a high-end card.
Big difference.
Because you cannot compare architectures like for like.
If the GTX 850 comes along running at 5GHz stock and blows everything away by 40% we're gonna be impressed by the performance, no? The new architecture would mean that the 5GHz clockspeed would be perfectly normal. The graphical prowess is no less impressive just because it's gone about it by increasing its clockspeed so much.
So, it stands to reason that if you took this hypothetical 850 and underclocked it to the clockspeed of a GTX580 it would perform terribly. By your reasoning this means that the Fermi architecture would be superior to the new one.
Ok guys!! Maybe I'm not explaining myself very well... I'm just talking about and comparing performance... CUDA cores and frequencies! Not comparing the other features!!
Fermi's got double clock speed on its CUDA cores. So "core clock" on Fermi means the ROPs and TMUs (and a few other things); the CUDA cores run at 2x that (i.e. the shader clock).
Now:
In Kepler, Nvidia decided to go with wider CUDA core arrays but at "normal" clock speeds. Think: Pentium 4(D) vs. Conroe philosophy.
That's why you can't just compare the value of a "CUDA cores x frequency" equation across two different architectures.
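A quick sketch of why that equation misleads, using publicly listed specs (GTX 580: 512 cores with a 1544 MHz hot-clocked shader clock over a 772 MHz core clock; GTX 680: 1536 cores at ~1006 MHz with no hot clock); the real-game ratio is approximate:

```python
# Why "CUDA cores x clock" breaks down across architectures.
# GTX 580 (Fermi): shaders are hot-clocked at 2x the listed core clock.
# GTX 680 (Kepler): no hot clock; cores run at the listed clock.
def naive_score(cores, shader_clock_mhz):
    return cores * shader_clock_mhz

gtx_580 = naive_score(512, 1544)   # Fermi's actual shader clock
gtx_680 = naive_score(1536, 1006)  # Kepler's listed clock

print(f"Naive ratio: {gtx_680 / gtx_580:.2f}x")  # ~1.95x
# Yet in games the GTX 680 is only very roughly 1.2-1.4x faster than
# the 580, because a Kepler core does less work per clock than a
# hot-clocked Fermi core. Raw "cores x clock" overstates the gap.
```

Same metric, two architectures, and the number lands nowhere near the real performance difference.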
On top of that, you are trying to compare Fermi to Maxwell:
Maxwell is basically Kepler, just more energy-efficient (at least for the time being).
You want a performance (i.e. FPS) improvement from Maxwell?
Wait for the 20nm chips.
As for performance:
Memory bandwidth (that's frequency x bus width), TMUs, ROPs and CUDA cores: all come together to offer performance.
If a GPU is lacking in one department, another GPU can (and will) be faster.
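As a worked example of the bandwidth formula above, using the listed memory specs (GTX 580: 384-bit bus at ~4008 MT/s effective; GTX 750 Ti: 128-bit bus at ~5400 MT/s effective):

```python
# Peak memory bandwidth = effective memory rate x bus width in bytes.
def bandwidth_gb_s(effective_mt_s, bus_width_bits):
    """Peak bandwidth in GB/s: million transfers/s times bytes/transfer."""
    return effective_mt_s * (bus_width_bits / 8) / 1000

gtx_580   = bandwidth_gb_s(4008, 384)  # ~192.4 GB/s
gtx_750ti = bandwidth_gb_s(5400, 128)  # ~86.4 GB/s
print(f"GTX 580: {gtx_580:.1f} GB/s, GTX 750 Ti: {gtx_750ti:.1f} GB/s")
```

So despite faster memory chips, the 750 Ti's narrow 128-bit bus gives it less than half the 580's bandwidth, one of the departments where the old high-end card still wins.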
Newer doesn't always mean better.
The GTX 580 was top of the line for a single GPU.
The GTX 750 Ti is (for now) the lowest Maxwell chip available, and it sits between the GTX 650 Ti and GTX 660 performance-wise.
Last thing: marketing is NOT logical.
So expecting a new GTX x60 to be as fast as a GTX x80 from one or two generations ago is... pointless.
That's why casual people need reviewers like Tom, to tell them why and where different products stand when put against each other.
Hope I cleared things up a bit!
Clock speeds are comparable... a 1GHz 580 or a 1GHz 750 is the same speed... speed won't change; however, the architectures are different. This in turn means far different core logic, core handling, driver improvements and changes, and other newer hardware on the board. So really, clock for clock, it comes down to the things I mentioned before. Depending on the settings, the bus width will also play a role in performance, but with overall core and driver efficiency being improved, that matters less.
Though just bear in mind, even if it is that tiny bit slower... it still consumes next to nothing and stays ice cool compared to a mega power consumer with a very hot chip.