GTX 750 vs GTX 580

JMMP

New member
OK, there is something that confuses me and I need help from you guys!
Why doesn't the GTX 750 2GB (non-Ti version), with the same number of CUDA cores and more memory, outperform or at least equal the GTX 580?! :headscratch:
 

Attachments

  • GTX 750.JPG (84.9 KB)
  • GTX 580.jpg (79.6 KB)
The GTX 580 was just beastly, like the 480, even if they consumed lots of power. The 750 is at a disadvantage with that 128-bit memory bus compared to the 384-bit on the 580, though someone will probably tell me that with GDDR5 it doesn't matter that much. Either way, those cards were ahead of their time.
 
The 128-bit memory bus matters, but it's not a very large difference! Maybe a frame or two...!? :p Where I see a big difference is the GPU clock (1084MHz on the GTX 750 vs 1544MHz on the GTX 580). But if we overclock the GTX 750 to 1400MHz, shouldn't we expect almost the same performance...?
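For what it's worth, the bus-width gap is bigger than a frame or two on paper. Here's a rough sketch of the peak-bandwidth math (the effective memory clocks are taken from typical reference specs, so treat the numbers as illustrative, not gospel):

```python
# Rough memory-bandwidth comparison (reference specs; treat as approximate).
# Peak bandwidth (GB/s) = effective memory clock (MHz) * bus width (bits) / 8 / 1000

def bandwidth_gbps(effective_clock_mhz, bus_width_bits):
    """Peak theoretical bandwidth in GB/s."""
    return effective_clock_mhz * bus_width_bits / 8 / 1000

# GTX 750: ~5000 MHz effective GDDR5 on a 128-bit bus
# GTX 580: ~4008 MHz effective GDDR5 on a 384-bit bus
gtx750 = bandwidth_gbps(5000, 128)   # ~80 GB/s
gtx580 = bandwidth_gbps(4008, 384)   # ~192 GB/s

print(f"GTX 750: {gtx750:.0f} GB/s")
print(f"GTX 580: {gtx580:.0f} GB/s")
print(f"The 580 has about {gtx580 / gtx750:.1f}x the bandwidth")
```

So even though GDDR5 helps, the 580's wider bus gives it roughly 2.4x the raw bandwidth on paper.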
 
You're comparing two different architectures - there have been two new architectures since the days of the 580 so you can't compare specs like clock speed like that any more than you can compare the clock speed of a Haswell CPU vs a Westmere CPU. Doesn't work like that.
 
You are totally right!! :) There are different architectures... and yes, I can compare them! Because the new architecture has worse performance than the old architecture... isn't that strange to you!? ;) It's like a Socket 775 quad-core being better than a second-generation i5 at the same clock frequency, cache, etc.! :)
 
lolwut

The 750, when pushed, is equal to a GTX 480, BUT it only draws 60W, unlike the 230W+ of the 480. How is that NOT an improvement??
 
I'm talking about performance, frames per second, not power consumption... and I'm comparing with the 580, not the 480...!!! ;)
 
And how much was the 580 brand new,
compared to how much a 750 is brand new today?
When an 880 comes out, then have a go at the architecture; the fact that a card has the same CUDA count as a high-end card from two generations ago proves a point. Don't compare the 750 to a 580: one is a gaming card, the other is more a media card, maybe entry-level gaming at best, and it draws only a quarter of the power.
 
First of all, a CUDA core count on its own isn't a gaming metric. CUDA is used for GPU compute tasks and whatnot. Comparing the gaming performance of cards by how many CUDA cores they have isn't the way to go about it.

Secondly, no, you can't directly compare different architectures clock for clock. Maxwell runs at a significantly higher clock speed all the time by default, but that's normal for it. Downclocking it to the baseline clocks of the older cards of course makes it seem worse.

I think you're getting confused by the graphic that Nvidia have given. Ignore the processor clock speed; it's the graphics clock speed that you're comparing with the 750. Ergo, Maxwell runs faster, no qualms.

Edit: Lol, if every 580 could run at 1.5GHz stock...we'd still be using them today :p
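To put rough numbers on the clock-speed point, here's a back-of-the-envelope peak-FP32 sketch. The clocks are reference-spec figures, and real game performance absolutely does not scale this cleanly, so this is illustrative only:

```python
# Back-of-the-envelope peak FP32 throughput (reference specs, illustrative only).
# GFLOPS = CUDA cores * 2 ops/clock (fused multiply-add) * clock (GHz)

def peak_gflops(cuda_cores, clock_ghz):
    """Theoretical peak single-precision GFLOPS."""
    return cuda_cores * 2 * clock_ghz

# Fermi's CUDA cores run at the "shader clock", so 1.544 GHz is the
# right number for the GTX 580 here, not its 772 MHz graphics clock.
gtx580 = peak_gflops(512, 1.544)   # ~1581 GFLOPS
gtx750 = peak_gflops(512, 1.085)   # ~1111 GFLOPS (boost clock)

print(f"GTX 580: ~{gtx580:.0f} GFLOPS peak")
print(f"GTX 750: ~{gtx750:.0f} GFLOPS peak")
```

Same CUDA core count, but the 580's hot-clocked shaders give it a higher paper peak, which is part of why the raw spec comparison misleads.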
 
The 580 was an improved version of the 480 (basically a refresh); they were still Fermi. That is the point I was making.

The 750 is an entry level card
the 580 was a high end card

Big difference
 

OK guys!! Maybe I'm not explaining myself very well... I'm just comparing performance: CUDA cores and frequencies, not the other features! :)
 

You're right, but I think it's legitimate to be confused, because it's strange to me that an old architecture with almost the same numbers (CUDA cores, memory frequencies and graphics frequencies) achieves better results than a new architecture! Shouldn't the new one be better, by logic? :)



The 580 was a high-end card 2 or 3 years ago, not now, compared with new high-end cards!

That's why I'm confused... why isn't a present-day entry-level card with almost the same numbers (CUDA cores, memory and graphics frequencies) better clock-for-clock and core-for-core, being a new architecture!? :)
 
Because you cannot compare architectures like for like.

If the GTX 850 comes along running at 5GHz stock and blows everything away by 40% we're gonna be impressed by the performance, no? The new architecture would mean that the 5GHz clockspeed would be perfectly normal. The graphical prowess is no less impressive just because it's gone about it by increasing its clockspeed so much.

So, it stands to reason that if you took this hypothetical 850 and underclocked it to the clockspeed of a GTX580 it would perform terribly. By your reasoning this means that the Fermi architecture would be superior to the new one.
 

OK! :) Let's make an example, and let's focus just on performance (please have patience with me!!! :p) :) If you found a second-hand 580 for, let's say, £115, would you buy it instead of a £110-120 GTX 750 2GB?
 
I'd go for the 750: it's a new card with a full warranty. Plus, how many driver updates have been geared towards getting the most out of a 580, and how many have been churned out for the Maxwell architecture? Once those are even, I'm sure the 750 would start giving much better performance.
 
Fermi vs. Maxwell...

Fermi ran its CUDA cores at double the clock speed. So the "core clock" on Fermi refers to the ROPs and TMUs (and a few other things); the CUDA cores ran at 2x that (the shader clock).

Now:
With Kepler, Nvidia decided to go with more CUDA cores at a "normal" clock speed. Think Pentium 4(D) vs. Conroe philosophy.
That's why you can't just compare the result of a "CUDA cores x frequency" equation across two different architectures.

On top of that, you are trying to compare Fermi to Maxwell:
Maxwell is Kepler, just more energy-efficient (at least for the time being).
You want a performance (i.e. FPS) improvement from Maxwell?
Wait for the 20nm chips.

As for performance:
Memory bandwidth (that's frequency x bus width), TMUs, ROPs and CUDA cores all come together to determine performance.
If a GPU is lacking in one department, another GPU can (and will) be faster.
Newer doesn't always mean better.

The GTX 580 was top of the line for a single GPU.

The GTX 750 Ti is (for now) the lowest Maxwell chip available, and it sits between the GTX 650 Ti and the GTX 660, performance-wise.

Last thing: marketing is NOT logical.
So expecting a new GTX x60 to be as fast as the GTX x80 of one or two generations ago is... pointless.
That's why casual people need reviewers like Tom, to tell them why/where different products stand when put against each other.

Hope I cleared things up a bit :)
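The hot-clock point is the crux of the whole thread, so here's a tiny sketch of it (the 772/1544 MHz figures are the GTX 580's reference-spec clocks):

```python
# Why "CUDA cores x clock" misleads across architectures (illustrative).
# Fermi ran its shaders at 2x the listed graphics clock (the "hot clock");
# Kepler/Maxwell shaders run at the graphics clock itself.

def shader_clock_mhz(graphics_clock_mhz, fermi=False):
    """Effective CUDA-core clock for a given graphics clock."""
    return graphics_clock_mhz * (2 if fermi else 1)

# GTX 580 spec sheets list 772 MHz graphics / 1544 MHz shader.
fermi_shader = shader_clock_mhz(772, fermi=True)    # 1544
# The GTX 750's ~1085 MHz boost clock IS its shader clock.
maxwell_shader = shader_clock_mhz(1085)             # 1085

print(f"GTX 580 shader clock: {fermi_shader} MHz (772 MHz graphics clock, doubled)")
print(f"GTX 750 shader clock: {maxwell_shader} MHz (no doubling)")
```

In other words, the 580's scary-looking 1544 MHz is really a 772 MHz graphics clock with Fermi's doubling applied, which is why the raw clock comparison in the opening post falls apart.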
 

Well done, buddy. In 5 years' time we'll probably end up with the same guy going "I got this card, but I could get a Titan"... lol ;)
 
Clock speeds are comparable in the sense that 1GHz on a 580 and 1GHz on a 750 are the same physical speed; the speed itself doesn't change. However, the architectures are different, which means far different core logic, core handling, driver improvements, and other newer hardware on the board. So clock-for-clock it really comes down to those things. Depending on the settings, the memory bus width will also play a role in performance, but with overall core/driver efficiency improved, the wide bus is less necessary.

Though just bear in mind: even if it is that bit slower, it still consumes next to nothing and stays cool as ice compared to a mega power-consuming, very hot chip.
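Since the thread keeps weighing speed against power draw, here's a minimal perf-per-watt sketch. The TDPs are reference-spec figures, but the relative-performance number for the 750 is an assumption made up purely for illustration, not a benchmark result:

```python
# Perf-per-watt sketch. TDPs are reference specs; the relative gaming
# performance figure for the 750 is an assumed value for illustration only.
cards = {
    # name: (relative performance, TDP in watts)
    "GTX 580": (1.00, 244),
    "GTX 750": (0.70, 55),   # assumed ~70% of a 580 -- illustrative, not measured
}

for name, (perf, tdp) in cards.items():
    print(f"{name}: {perf / tdp:.4f} relative perf per watt")
```

Even granting the 580 a clear raw-FPS win, the 750 comes out roughly 3x more efficient under these assumptions, which is the point several posters above are making.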
 

Uh, sure, you can compare their physical clock speeds... but what will that tell you? Just about bugger all in terms of performance.
 