OC3D Review: Asus GTX 590

I think the 590 is geared to use the full PCI Express 2.0 x16 bandwidth at full load and triple-monitor resolutions, so it doesn't always need to be faster. Maybe it'll struggle with 3D Vision, and enthusiast overclockers, gamers etc. would go for single-GPU SLI instead for those reasons and more, since their motherboards are designed for it. And as quad SLI is a rare performer in games and these are GeForce cards, one 590 is ideal for the media and entertainment PC and for motherboards with one PCI Express 2.0 x16 slot for graphics. Getting a proper enthusiast rig needs the motherboard to begin with.

I'll be upgrading when I've lived a lot of overclocking life on my cards, I think. The 590 might need PCIe frequency overclocks to get more from it, and I read the review and they didn't get a massive overclock on the 590.
 
Glad I went for the 570 now... I have good choices for a future SLI setup that should still beat both dual-chip cards.
 
I think the 590 is geared to use the full PCI Express 2.0 x16 bandwidth at full load and triple-monitor resolutions, so it doesn't always need to be faster. Maybe it'll struggle with 3D Vision, and enthusiast overclockers, gamers etc. would go for single-GPU SLI instead for those reasons and more, since their motherboards are designed for it. And as quad SLI is a rare performer in games and these are GeForce cards, one 590 is ideal for the media and entertainment PC and for motherboards with one PCI Express 2.0 x16 slot for graphics. Getting a proper enthusiast rig needs the motherboard to begin with.

I'll be upgrading when I've lived a lot of overclocking life on my cards, I think. The 590 might need PCIe frequency overclocks to get more from it, and I read the review and they didn't get a massive overclock on the 590.

You're meant to use x16 bandwidth when using a dual-GPU card. The problem with this card when it comes to overclocking is its inability to draw more power; if Nvidia decided to add an extra power connector, or perhaps a non-reference card gets the added power connector, then you'll be able to overclock it to more respectable speeds. Finally, I doubt people will buy an Nvidia card for triple-monitor setups; AMD Eyefinity is more mature and scales better, and at high resolutions memory makes a difference and the 6990 has an extra gig of it.
 
To be quite honest, I was expecting to be blown out of the water with heaps of reasons why it would be superior to the 6990. But with TTL's unbiased video and third-party criticism, it's really tough (for me) to pick a clear victor. Like everyone else I'm vacillating betwixt the two companies and waiting for non-reference models with aftermarket coolers and whatnot (particularly MSI's Twin Frozr 6990 and 590).

Thanks for another video Tom! Hope the weather is nicer in England; it's starting to get swamp-like down here again in Georgia.


-Gentlemen
 
Mm, a dual 480 would be bottlenecked slightly at triple-monitor HD resolutions, so the 590 is, I think, geared about right for today's PCI Express standard. It suits media and entertainment PCs for the all-round user with three monitors and no spare PCI Express slots for single-GPU SLI, and it suits them for power consumption, noise, heat, etc. There are a lot of users like that, enjoying Blu-ray videos, editing, gaming and so on. Film makers might use this card. LAN party gamers want a small chassis, and this card can do for that. GeForce.
 
Tbh all you are really buying is a convenient form of GTX 570 SLI, if you want an all-in-one silent GTX 570 SLI package. I believe both Nvidia and AMD have delivered nothing useful for us SLI users, because I got a 41000 P score in Vantage with my two 480s with a bit of overclocking.

 
Why is it clocked so low?

Nvidia has a serious power problem with the 590: if they increase the clocks, the power goes above the PCI Express limits. So overclock at your own risk. Just remember that in this case you are risking your motherboard as well as your graphics card.
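
To put a rough number on why the clocks had to stay low, here's a minimal back-of-the-envelope sketch in Python. It assumes the usual first-order rule that dynamic power scales with clock and with the square of voltage; the 365 W board power is the figure quoted later in this thread, while the clocks and voltages below are purely illustrative placeholders, not measured values.

```python
# Back-of-the-envelope only: board power ~ P0 * (f/f0) * (V/V0)^2.
# Shows how a modest clock/voltage bump pushes a ~365 W board past the
# 375 W that a PCIe 2.0 slot plus two 8-pin connectors can deliver.

def estimated_board_power(base_power_w, base_clock, base_volt, new_clock, new_volt):
    return base_power_w * (new_clock / base_clock) * (new_volt / base_volt) ** 2

PCIE_BUDGET_W = 75 + 2 * 150   # x16 slot + two 8-pin connectors = 375 W

stock = estimated_board_power(365, 607, 0.94, 607, 0.94)  # 365 W (quoted figure)
oc    = estimated_board_power(365, 607, 0.94, 700, 1.00)  # roughly 475 W

print(f"stock ~{stock:.0f} W, overclocked ~{oc:.0f} W, budget {PCIE_BUDGET_W} W")
print("over the PCIe budget" if oc > PCIE_BUDGET_W else "within the PCIe budget")
```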
 
Well, the primary problem with the HD 6990 is heat and noise. A quick look at the graphs will show that the HD 6990 whips the GTX 590 where there is a difference. So with watercooling solving the heat/noise issue, it's an easy pick for the HD 6990.

But you'd still be better off going GTX 570 SLI, assuming your budget won't stretch to GTX 580s in SLI.

Well no, actually. Two 6950s in CrossFire will outperform two 570s in SLI because the 69xx cards scale better in dual- and triple-card configurations. The other advantage of the 6950 is that they are substantially less expensive. If you are worried about noise with these cards, get them with an aftermarket cooler; they are still cheaper than a pair of 570s. If you decide to BIOS mod them, they will be faster than a pair of 580s.
 
Nvidia has a serious power problem with the 590: if they increase the clocks, the power goes above the PCI Express limits. So overclock at your own risk. Just remember that in this case you are risking your motherboard as well as your graphics card.

Nope. Every card with an additional power source plugged into the PCB, in addition to the PCIe slot feed, is intended to go over the PCIe limits.

There's nothing stopping any manufacturer putting 4x 8-pin PCIe power connectors on a card. There are no limits. You can have a 700W++++ card if you want.

The 590 """could""" be overclocked to 580 levels, and probably beyond, not by conventional methods - AND - if the circuitry around it is up to it. Which will depend on the build.

Time will tell, just keep an eye on those overclocking records.
 
Nope. Every card with an additional power source plugged into the PCB, in addition to the PCIe slot feed, is intended to go over the PCIe limits.

There's nothing stopping any manufacturer putting 4x 8-pin PCIe power connectors on a card. There are no limits. You can have a 700W++++ card if you want.

The 590 """could""" be overclocked to 580 levels, and probably beyond, not by conventional methods - AND - if the circuitry around it is up to it. Which will depend on the build.

Time will tell, just keep an eye on those overclocking records.

Yes, you are correct: high-end graphics cards usually have additional power inputs, which can be 6-pin, 8-pin or both. That said, the following is an excerpt from the PCIe 2.0 spec:

A11: PCI-SIG has developed a new specification to deliver increased power to the graphics card in the system. This new specification is an effort to extend the existing 150watt power supply for high-end graphics devices to 225/300watts. The PCI-SIG has developed some boundary conditions (e.g. chassis thermal, acoustics, air flow, mechanical, etc.) as requirements to address the delivery of additional power to high-end graphics cards through a modified connector. A new 2x4 pin connector supplies additional power in the 225/300w specification. These changes will deliver the additional power needed by high-end GPUs. The new PCI-SIG specification was completed in 2007.

What that means is that, at maximum, with the two 8-pin connectors plus the 75 watts from the PCIe slot, there is a maximum of 375 watts available under PCIe 2.0. Nvidia says the GeForce GTX 590 is a 365 W board. By the way, there is a reason Nvidia did not add a third 8-pin connector, or a fourth for that matter: they will not build a card outside of the PCIe 2.0 specification. If they did, they would have to warn anyone who installed such a card that they are voiding the warranty of their motherboard. If it were as simple as just adding power ad hoc, Nvidia probably would have done it so they could run the 590 at higher clock rates.
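
For anyone who wants to check the arithmetic, here's a tiny sketch of that power budget. The per-connector figures are the commonly cited PCIe/ATX ratings; the 365 W number is Nvidia's quoted board power from the paragraph above.

```python
# PCIe 2.0 power budget for a card with two 8-pin connectors.
SLOT_W      = 75    # x16 slot
EIGHT_PIN_W = 150   # per 8-pin PCIe connector

budget = SLOT_W + 2 * EIGHT_PIN_W   # 75 + 300 = 375 W
gtx590_board_power = 365            # Nvidia's stated figure

print(f"budget {budget} W, GTX 590 {gtx590_board_power} W, "
      f"headroom {budget - gtx590_board_power} W")   # only 10 W of headroom
```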

Sure, a manufacturer could put as many power connectors on a board as they want. Doing so would put the product outside the PCIe 2.0 specification, so no one would be stupid enough to install the thing. Which is why none of them are doing it; neither Nvidia nor AMD have engineers so moronic that they would design a board out of spec.

I hope that cleared things up for you.
 
Well no, actually. Two 6950s in CrossFire will outperform two 570s in SLI because the 69xx cards scale better in dual- and triple-card configurations. The other advantage of the 6950 is that they are substantially less expensive. If you are worried about noise with these cards, get them with an aftermarket cooler; they are still cheaper than a pair of 570s. If you decide to BIOS mod them, they will be faster than a pair of 580s.

You make a fine point.
 
Yes, you are correct: high-end graphics cards usually have additional power inputs, which can be 6-pin, 8-pin or both. That said, the following is an excerpt from the PCIe 2.0 spec:

A11: PCI-SIG has developed a new specification to deliver increased power to the graphics card in the system. This new specification is an effort to extend the existing 150watt power supply for high-end graphics devices to 225/300watts. The PCI-SIG has developed some boundary conditions (e.g. chassis thermal, acoustics, air flow, mechanical, etc.) as requirements to address the delivery of additional power to high-end graphics cards through a modified connector. A new 2x4 pin connector supplies additional power in the 225/300w specification. These changes will deliver the additional power needed by high-end GPUs. The new PCI-SIG specification was completed in 2007.

What that means is that, at maximum, with the two 8-pin connectors plus the 75 watts from the PCIe slot, there is a maximum of 375 watts available under PCIe 2.0. Nvidia says the GeForce GTX 590 is a 365 W board. By the way, there is a reason Nvidia did not add a third 8-pin connector, or a fourth for that matter: they will not build a card outside of the PCIe 2.0 specification. If they did, they would have to warn anyone who installed such a card that they are voiding the warranty of their motherboard. If it were as simple as just adding power ad hoc, Nvidia probably would have done it so they could run the 590 at higher clock rates.

Sure, a manufacturer could put as many power connectors on a board as they want. Doing so would put the product outside the PCIe 2.0 specification, so no one would be stupid enough to install the thing. Which is why none of them are doing it; neither Nvidia nor AMD have engineers so moronic that they would design a board out of spec.

I hope that cleared things up for you.

Yeah, you're right about the specification. It's a shame those GF110 chips are so power hungry. Nvidia really had to tune the 590 down to make it work. I was hoping for something better from them. Epic fail!
 
One thing you need to bear in mind also is that that specification is years old (2007), when the thought of anything coming close to 365 W was crazy talk. Even though there are/were professional cards that make a mockery of that.

Another point is that the PCI-SIG would not specify that the slot AND external power together put forward a limit on the power considerations regarding it. They would ONLY concentrate on the slot itself. All the quoted paper suggests is that the power is reached "with the addition of the suggested/new" 8- and 6-pin power connectors (i.e. the quoted NEW 2x4-pin) - this was at a point in time (2007) when we moved from 4-pin Molex as very much the standard to 6- and 8-pin PCIe.

Time has moved on, and besides motherboard manufacturers moving onto PCIe 2.0a/b/c/etc. addendums to the original PCI-SIG submission for 2.0, there's nothing preventing the PSU manufacturers from suggesting a 10-pin PCIe power connector. Or, to hell with it, here's a 20-pin.

Reading the PCI-SIG material on 3.0 last week, there is no mention of wattage boundaries, only the inference that "they can supply more power" whilst at the same time being "more efficient" - which can be read a number of ways. Plenty of bandwidth talk.

In effect there is no limit put forward by the 2.0 specification, except for the slot itself and suggestions of what can be added to it, taking into account the technology available at the time of writing.
 
One thing you need to bear in mind also is that that specification is years old (2007), when the thought of anything coming close to 365 W was crazy talk. Even though there are/were professional cards that make a mockery of that.

Another point is that the PCI-SIG would not specify that the slot AND external power together put forward a limit on the power considerations regarding it. They would ONLY concentrate on the slot itself. All the quoted paper suggests is that the power is reached "with the addition of the suggested/new" 8- and 6-pin power connectors (i.e. the quoted NEW 2x4-pin) - this was at a point in time (2007) when we moved from 4-pin Molex as very much the standard to 6- and 8-pin PCIe.

Time has moved on, and besides motherboard manufacturers moving onto PCIe 2.0a/b/c/etc. addendums to the original PCI-SIG submission for 2.0, there's nothing preventing the PSU manufacturers from suggesting a 10-pin PCIe power connector. Or, to hell with it, here's a 20-pin.

Reading the PCI-SIG material on 3.0 last week, there is no mention of wattage boundaries, only the inference that "they can supply more power" whilst at the same time being "more efficient" - which can be read a number of ways. Plenty of bandwidth talk.

In effect there is no limit put forward by the 2.0 specification, except for the slot itself and suggestions of what can be added to it, taking into account the technology available at the time of writing.

The PCIe 2.0 spec sets the total power standard for high-performance PCIe cards. The specification is very clear about the overall power and thermal limits regardless of power source. It is true that the PCIe 3.0 standard "may" increase some of those power/thermal limits; however, the PCIe 3.0 standard is not finalized and is not available on any motherboard available to the public.
 
The PCIe 2.0 spec sets the total power standard for high-performance PCIe cards. The specification is very clear about the overall power and thermal limits regardless of power source. It is true that the PCIe 3.0 standard "may" increase some of those power/thermal limits; however, the PCIe 3.0 standard is not finalized and is not available on any motherboard available to the public.

How bizarre, cos you can look at the 150 W specification here: http://www.pcisig.com/specifications/pciexpress/graphics/

.. and you can search for anything in addition (if you have membership) and there's nothing to be found, outside of just 150 W (and suggestions on how to achieve up to 365 W). It used to have 'suggestions' on how to reach 300 W, which are now deleted (or struck through, as is the method).

Also, you can download the 3.0 base spec plus the forthcoming 3.1 spec.
 
How bizarre, cos you can look at the 150 W specification here: http://www.pcisig.com/specifications/pciexpress/graphics/

.. and you can search for anything in addition (if you have membership) and there's nothing to be found, outside of just 150 W (and suggestions on how to achieve up to 365 W). It used to have 'suggestions' on how to reach 300 W, which are now deleted (or struck through, as is the method).

Also, you can download the 3.0 base spec plus the forthcoming 3.1 spec.

That is 150 watts per 8-pin connector: 2x 8-pin = 300 watts, plus 75 watts from the PCIe 2.0 slot = 375 watts. Yes, the PCIe 2.0 spec includes up to 2x 8-pin connectors. See: 375 watts, as I suggested. Though it wasn't really a suggestion; it is the PCIe 2.0 production specification, not to be confused with the PCIe 2.0 white paper, as they are not the same thing.

The PCIe 3.0 spec was released to the PCI-SIG partners on November 18, 2010. PCI-SIG expects the PCIe 3.0 specifications to undergo rigorous technical vetting and validation before being released to the public. This process, which was followed in the development of prior generations of the PCIe Base and various form factor specifications, includes the corroboration of the final electrical parameters with data derived from test silicon and other simulations conducted by multiple members of the PCI-SIG.

The PCIe 3.0 final production spec is likely to change as the many PCI-SIG stakeholders produce functioning silicon from the current spec. As a result, neither Intel nor AMD plans on including PCIe 3.0 on their current chipsets (Sandy Bridge, Bulldozer). Both companies have "suggested" that they don't intend to integrate PCIe 3.0 until late 2012 or 2013. This is early speculation from both companies, so those estimates could be substantially delayed.

Now on to the matter of the physics. The main reason there is a 375-watt per-slot limit in the PCIe 2.0 spec is that when you put that much power in, you need to get that much heat out. Given the space constraints of the form factor, the PCI-SIG partners agreed that 375 watts as an upper thermal and electrical limit per slot would be sufficient. The limit can be doubled simply by using 2x PCIe 2.0 slots (CrossFire, SLI).

It is unlikely that the PCI-SIG partners will increase the 375-watt limit in the PCIe 3.0 production spec, for two main reasons: one, the increased production cost is prohibitive; and two, as the pitch of GPU silicon is reduced, the number of transistors that can be included will increase while producing far less heat. That is to say, future GPUs are likely to use less power and produce less heat while improving performance. There simply isn't any need to increase the thermal or electrical profiles.

I have two questions for you, Rastalovich:

If Nvidia could have simply added another power connector, why didn't they?

Is it because they are stupid or because they are smart?
 
They would go along with the PCI-SIG suggestion of what they've ratified in conjunction with what the ATX x.x standard puts forward as a method of power supply, i.e. "we've made an 8-pin PCIe connector" - and PCI-SIG adjust their documents accordingly once it's passed through the ECN.

375 W wouldn't be put forward as a limit due to heat dissipation within a PC case, as they know full well you can put 4x cards in CrossFire/SLI within said case. 8x if you count the productivity servers you can install parallel GPU setups in.

To insist 375 W was the max and to allow the further addition of PCIe x16 electrical slots would be silly, don't you think?


EDIT: I have a feeling that maybe you're not seeing my view of how the system works, so I've put together a "brief" explanation in the best layman's terms I can:

In the model we're looking at, there are 3 prominent bodies:

ATX

PCI

Graphics card manufacturers

ATX come out with the standards that power supply manufacturers are expected to abide by when producing PSUs for the industry.

PCI have standards that apply to the use of buses, in the main, which are most commonly thought of as slots (even though they can be integrated also).

Graphics card manufacturers obviously produce the cards in our little model that display stuff - basically. AMD, Intel, nVidia, Silicone, Matrox and a few more.

Aside from these three, there are many groups with their own standards which the above three bear in mind when putting forward their own studies/papers/standards, ranging from safety people, environmental people, electrical people, motherboard and other component people; the list goes on quite a bit.

Between all these groups there is a whole load of interaction, co-operation and study. As an example, a graphics card manufacturer will come forward wanting to make an OEM card that OEMs can use in their mass-produced PCs aimed at business and the public. The OEM has told them, as they usually do - very strongly - that they don't want ANY external power connections to this card, but it has to be more powerful than the present integrated/embedded selection. In this case, the graphics people can think about power and look directly at what PCI have put forward. They'll comply with the papers that PCI have in turn had motherboard manufacturers comply with.

As time goes by, the consumer market gets more demanding. When PCI came out with their new standard for their graphics slot, they gave everyone the boundaries within which the power would/could be used. As graphics cards advanced, their manufacturers saw the easy option of adding an additional Molex cable to the side of their card's PCB to go beyond what PCI had stated would be available. Everyone spoke, and PCI added their errata/addendum to their previous paper on power use. It now includes "to achieve the power required for blah blah, a single Molex is used" and so on.

Now the PCI's paper will include this addition. The bar has been raised as far as the graphics card people are concerned, and time continues to move on; advances are made on this new power level.

OEMs (HP, Dell, Acer, etc.), by the way, are still insisting on NO extra PCB plugs.

The graphics people have reached a new era in their R&D; they need to surpass this poxy Molex supply. They talk with the ATX people, who come up with a new type of connector (the 6-pin PCIe, for example). They produce their new paper, ATX x.x, and in turn PCI will catch wind of this - run a bunch of tests, do some studies, tell everyone they're happy, and bring out a new errata/addendum to the existing PCI standard.

Now the PCI's paper will include yet another raising of the power level, supporting the *new* 6-pin PCIe connector.

Repeat for the use of 2x 6-pin, 8-pin and 6+2-pin, 2x 8-pin and so forth.

Theoretically, the graphics card people and the ATX people could be in discussion about a new 10-pin or 6+2+2 PCIe connector. Each of the groups will talk to the others, tests will be done, and, as per usual, regulations and studies will be re-issued with a new proposed power level. The ATX people say they'll bring out PSUs with 1x 8+2+2 connector for the lower end of the market and 2x for... possible dualling of these newer cards. Bringing a possible new power threshold that 2x 10-pin connectors on a single graphics card can handle. (Purely theory; I can't see this happening with the proposed new die shrinks either - but who knows - it is possible.)

As each of these groups talks with the others and conducts their own internal tests, bars are continuously raised. A quoted paper stretching back to 2007 regarding the power levels for PCIe use can only make suggestions about what was available at the time. It, at that time, had little idea that 2x 8-pin would become so popular.

One thing is for sure: stress to PCBs due to the plugging, unplugging, pulling and suchlike of additional power sources is not favored, which is why a lot of OEMs dislike them. One of the arguments against a 2x 10-pin power arrangement is that it emulates the motherboard power connectors, which would come too close to stressing the rear or top of the card. But ingenious inventions could work around it somehow. 3x 8-pin is obviously suggesting a similar cable to what motherboards have now. Sticking those at the back of an 11-inch PCB is not wanted, I don't think.
 
375 watts - is that the max that 2x 8-pin plus motherboard power can give? Because the AMD 6990 can draw 450 watts with the faster BIOS, according to AMD's website a few days ago.

8-pin = 150 watts

mobo PCI Express slot = 75 watts

I think.
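
Working through those figures (a quick sketch; the 450 W "faster BIOS" number is the one quoted above, and drawing it would mean pulling the slot and/or connectors beyond their combined 375 W rating):

```python
slot_w, eight_pin_w = 75, 150
spec_budget = slot_w + 2 * eight_pin_w     # 375 W under PCIe 2.0
hd6990_uber_bios_w = 450                   # figure quoted above from AMD

excess = hd6990_uber_bios_w - spec_budget  # 75 W beyond the rated budget
print(f"{hd6990_uber_bios_w} W draw vs {spec_budget} W budget -> {excess} W over spec")
```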
 
They would go along with the PCI-SIG suggestion of what they've ratified in conjunction with what the ATX x.x standard puts forward as a method of power supply, i.e. "we've made an 8-pin PCIe connector" - and PCI-SIG adjust their documents accordingly once it's passed through the ECN.

If by "suggestion" you are referring to industry-accepted standards, then I agree. Trade organizations create standards, IP companies design to those standards, and manufacturers build to those standards. The result is a larger and more stable market for the consumer, which at the end of the day is what it's all about.

375 W wouldn't be put forward as a limit due to heat dissipation within a PC case, as they know full well you can put 4x cards in CrossFire/SLI within said case. 8x if you count the productivity servers you can install parallel GPU setups in.

You're right, 375 watts was not put forward as the total thermal limit within a PC case. 375 watts is the electrical/thermal limit of a single PCIe 2.0 slot. PCIe does not restrict the total number of potential PCIe 2.0 slots. ATX, however, seems to think that seven expansion slots are enough, so for the most part seven PCIe slots are the most you can get on a standard ATX motherboard.

To insist 375 W was the max and to allow the further addition of PCIe x16 electrical slots would be silly, don't you think?

No, because by putting that load on another PCB in another PCIe 2.0 slot, the surface area for thermal dissipation has at least doubled. What is silly is suggesting putting that same thermal load - in this case 750 watts - on a single PCIe 2.0 card. Thank you for making my point for me.

EDIT: I have a feeling that maybe you're not seeing my view of how the system works, so I've put together a "brief" explanation in the best layman's terms I can:

That's true; as best I can tell, you think industry standards are just suggestions because there is no enforcement body "other than the marketplace". What you seem to be saying is that in theory a graphics chip manufacturer could design and build a single 750-watt graphics board. OK, sure, in theory; my point is they won't, because of those pesky industry standards. That, and putting that much heat in that small a form factor without an extraordinary cooling solution is a good way to start a fire.

So this whole back and forth started because I said:

Nvidia has a serious power problem with the 590: if they increase the clocks, the power goes above the PCI Express limits. So overclock at your own risk. Just remember that in this case you are risking your motherboard as well as your graphics card.

and you replied:

Nope. Every card with an additional power source plugged into the PCB, in addition to the PCIe slot feed, is intended to go over the PCIe limits.

There's nothing stopping any manufacturer putting 4x 8-pin PCIe power connectors on a card. There are no limits. You can have a 700W++++ card if you want.

The 590 """could""" be overclocked to 580 levels, and probably beyond, not by conventional methods - AND - if the circuitry around it is up to it. Which will depend on the build.

Time will tell, just keep an eye on those overclocking records.

Now, the original question that I replied to was:

"why is the Nvidia 590 clocked so low"

I stand by my answer. Nvidia says that the 590 at load draws 365 watts. Nvidia says that the clock rates they set for the reference 590 are to ensure compliance with PCIe electrical/thermal standards. The fact that Nvidia did not characterize this as a problem doesn't make it any less the case. The fact is, if Nvidia could have set the reference clocks higher, they would have. Nvidia would love to claim the fastest-single-graphics-card title; as it stands, AMD's 6990 holds that title, costs between $75 and $100 less, and uses less power. The one key drawback of the 6990 reference design is noise, so cheers to Nvidia for making the 590 quiet. That said, the OEMs already have aftermarket cooling in the pipeline for the 6990; soon they will be very quiet as well, and you can bet it won't carry a $75 premium.

That said, a pair of 6950s in CrossFire will outperform both the 590 and the 6990. Two 6950s in CrossFire will outperform the more powerful 570s in SLI because the 69xx cards scale better in dual- and triple-card configurations. There are very, very quiet versions of the 6950 available from several OEMs. The 6950 is a lot less expensive than any of the other options above. So I say, if you need extreme performance and you're smart (like a good value), get a pair of 6950s and call it a day.
 