PSUs taking a pounding

K404

New member
http://www.extremetech.com/article2/0,1697,1932947,00.asp

404 ideas (in homage to Mav's comment LOL :D):

Do not assume that the "SLI Certified" label on the PSU box is a guarantee. Most of the original PSU certifications were done on the 6-series cards, and even though the 7-series cards are more power-efficient, they're pulling more power overall. The X1900 series is a complete unknown, so technically no guarantees can be given.

It's being suggested that X1900 Crossfire setups be given single-rail PSUs for better stability. The largest single 12V rail I can think of is the Tagan in combined mode, and that only gives 35A.

If the 7800GTX 512 can suck up 11A fully loaded BY ITSELF, an SLI setup needs 22A alone. That leaves 13A MAXIMUM for the CPU, HDDs, cold cathodes etc. This ain't enough.

A multi-rail PSU can provide more amps over multiple 12V rails, but if the power draw on one rail is too large, the PSU's over-current protection (OCP) kicks in to prevent damage. There appears to be only one PSU (an FSP 700 GLN) that can safely supply unbalanced 12V draws; the only limiting factor is the total output wattage.
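To put some numbers on this, here's a minimal Python sketch of the 12V budget (the 35A rail and 11A-per-card figures are from the posts above; the per-rail limits in the multi-rail check are illustrative assumptions, not any particular PSU's spec):

# 12V power-budget sketch. Thread figures: a 35A single rail (Tagan
# in combined mode) and ~11A per 7800 GTX 512 at full load.

def single_rail_headroom(rail_amps, card_amps, num_cards):
    # Amps left for CPU, HDDs, fans etc. after the graphics cards.
    return rail_amps - card_amps * num_cards

def multi_rail_ok(loads, limits):
    # A multi-rail PSU can trip even when the TOTAL draw is fine,
    # because each rail is protected (OCP) individually.
    return all(load <= limit for load, limit in zip(loads, limits))

print(single_rail_headroom(35, 11, 2))    # 13A left - tight, as above
print(multi_rail_ok([11, 11], [18, 18]))  # True: cards split across rails
print(multi_rail_ok([22, 0], [18, 18]))   # False: both cards on one rail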

Twin PSUs might be (read as "probably ARE") the way forward until PSU makers sort this out.

Kenny
 
My X1800XL card uses 10-11 amps when idle and can go as high as 21-22 under load, and this is with the card @ stock.

My Hiper PSU handles the rig fine though, and it has dual 12V rails :)
 
I think this is pretty symptomatic of the computer industry at the moment: we need two of everything now.

Dual-core CPUs because they can't ramp clock speeds any higher, SLI and Crossfire GPU setups because they just want to be head of the benchmarks, and now we need dual PSUs to run all of this.

Apart from dual-core CPUs, as there is little alternative to them, I'm going to stick with a single computer, not two stuck together costing a fortune, sucking down a tonne of juice and chucking out a tonne of heat.

G
 
Interesting read, K404.

Good to see PCP&C passing the tests (especially as my 850W beauty has a 16A PCI-E rail). Also interesting to see the power draw of the 7800GTX 512MB being 11 amps (at stock, full bore); that's 22A in total.

I think I read somewhere that the X1900 was more in the region of 13 amps per card - a staggering 26 amps in total.

Good news is that the 90nm 7900 will have a lower power draw than the 110nm 512MB 7800 (some 12% lower at full bore).
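For anyone wanting to turn those amp figures into watts, the arithmetic is just P = V x I on the 12V rail. A quick sketch using the numbers quoted above (the 12% saving is the rumoured 7900 figure, so treat it as an estimate):

# Watts from amps on the 12V rail, using the figures quoted above.
gtx512_amps = 11.0
print(gtx512_amps * 12.0)        # ~132W per 7800 GTX 512 at full load
print(gtx512_amps * (1 - 0.12))  # ~9.7A if the 7900 really is 12% lower
print(13.0 * 12.0 * 2)           # ~312W for a pair of X1900s at 13A each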

Of course you can juggle connections around to overcome this (usually just using one PCI-E connector and then adapting a Molex connection for the 2nd card), but FSP (I think it is) is in the process of releasing a VGA-only PSU (it slots into a 5 1/4" drive bay) to help overcome this issue.

GPUs in the future will use GDDR4, which is supposed to help with the RAM power draw on VGA cards, and of course the GPUs themselves are getting smaller (80nm and 65nm in the next 12 months). Unfortunately, they will also go faster, so don't expect big savings there.
 
*Hugs 520W PowerStream* 33A over a single 12V rail.

This might be the dawn of dual PSUs though, perhaps: a single beasty supply for the mobo/gfx, and maybe a small Shuttle-type unit hidden away powering the hard drives, opticals, fans and any extras.

Those are figures for stock cards, though. I dread to think what a pair of fully modded-up and overclocked cards is going to be pulling.
 
I'm using an 850SSI, and with the FX-60 above 3.4GHz at 1.55V and the RAM @ 250 3.3V, I can't get my X1900XTX past 680/900 without the PSU's over-current protection kicking in and a system reboot :O

How many amps do the X1900XTXs draw on the 12V line? ATITool reports a 26.5A draw when @ 730/900 under heavy 3D load :O

With a £300 PSU, I don't expect this :@
 
LiViNgL@rGe said:
I'm using an 850SSI, and with the FX-60 above 3.4GHz at 1.55V and the RAM @ 250 3.3V, I can't get my X1900XTX past 680/900 without the PSU's over-current protection kicking in and a system reboot :O

How many amps do the X1900XTXs draw on the 12V line? ATITool reports a 26.5A draw when @ 730/900 under heavy 3D load :O

With a £300 PSU, I don't expect this :@

Firstly, don't despair, my friend. Be mindful that the PCP&C 850W unit kicks 16A into the two PCI-E ports, 34A into the 8-pin connector, and has two other 17A rails. Use one PCI-E connector from the PSU and a Molex-to-PCI-E adapter to supply the other card (I suggest you feed off the Molex line that goes to your fan controller and hard disks if possible).

All your problems will just float away ;)
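For what it's worth, here's a rough sketch of why splitting the load helps: the PCI-E slot itself can feed the card up to 75W, and the rest comes through the 6-pin connector, so a 26.5A card can overload a 16A rail on its own. The 75W slot limit is the PCI Express spec; the other figures are from this thread:

# Rough check: does the 6-pin connector's share of the card's draw
# exceed a 16A PCI-E rail? Assumes the slot feeds up to 75W (PCI-E
# spec) and the remainder comes through the 6-pin connector.
CARD_AMPS_12V = 26.5      # ATITool reading quoted above
SLOT_WATTS = 75.0         # max power through the PCI-E slot itself
RAIL_LIMIT_AMPS = 16.0    # PCP&C 850W PCI-E rail, per this post

card_watts = CARD_AMPS_12V * 12.0                  # ~318W total
connector_amps = (card_watts - SLOT_WATTS) / 12.0  # ~20.3A via 6-pin
print(connector_amps > RAIL_LIMIT_AMPS)  # True: that rail's OCP trips,
                                         # hence feeding the card from a
                                         # separate Molex line helps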
 
Having bought the 850W myself, I never thought I would have to upgrade again :O. Let alone after 12 months. I wonder how the 1kW beast would cope?
 
name='scorchio' said:
Having bought the 850W myself, I never thought I would have to upgrade again :O. Let alone after 12 months. I wonder how the 1kW beast would cope?

No better, as it's spread over more rails, I would have thought? :?

Well, I think it'd mean you could use the supplied PCI-E connectors and not a regular Molex.
 
Before we all go and panic-buy a PSU, let's see if Living Large cures his problem by shifting the load of one card off the PCI-E connector on the PSU and using two Molexes and a PCI-E adaptor to power the card.

I think I also recall that the RDX200 chipset itself was power-hungry (not 100% sure, though).

Mav

Well, the other thing I'm starting to consider now, with all this talk of the size and type of PSUs required for UBER rigs, is the need to employ a 2nd machine for 24/7 usage.

With rising energy costs, my machine is starting to cost a fortune to run 24/7.

Lest we forget that a system with an A64 4000+ and a 7800GTX can run games perfectly (60+ FPS) with full eye candy at 1024x768, with a little compromise at 1280x1024, and with reduced AA & AF at 1600x1200, for the more energy-conscious out there.

Quote about load balancing from ExtremeTech:

A Question of Balance

It turns out that throwing bigger power supplies at the problem isn't necessarily the answer. Part of the issue revolves around power supply design. The rapid escalation of power usage by today's high-performance graphics cards has taken some manufacturers by surprise.

Kelt Reeves of Falcon Northwest was caught unawares when another Web site reviewed one of his systems, which had been configured with a fairly light load-out—except for two 512MB 7800 GTX cards. Falcon Northwest uses the 600W version of the same Silverstone power supply and was surprised when the reviewer experienced system shutdowns.

"We've loaded up systems with two 512MB 7800 GTX cards, four 10,000 RPM hard drives and two optical drives and never had a problem with the Silverstone power supplies," Reeves noted in a phone conversation.

What happens is that some power supplies are designed with a shared power plane. According to Tony Ou, of Silverstone technical marketing, in an email:
"I am sure you already know that PC power supplies we have today have three main rails, +3.3V, +5V, and +12V, which are required to power various components. However, what most people don't know is that many power supplies are designed with shared power plane (it is very common to have +5V and +12V rails linked together) to help reduce cost and obtain higher maximum power. If you have seen our retail box for our ST60F 600W model or read the manual, you will see this:

+5V min. load is 10A when +12V output is 30A to 38A

+5V min. load is 15A when +12V output is 38A to 42A

This means that in order for our power supply to generate maximum power for the +12V rail optimally, the +5V must also be loaded up. Normally this is not a problem, because most systems draw enough power from both the +5V and +12V rails evenly to make the cross-loading requirement a non-issue. Even when SLI came out in late 2004, the most powerful gaming system at the time would rarely draw more than 30A from the +12V rail..."

In other words, if you balance the loads on the different rails, you won't have this kind of problem. So we took a PCI Express power adapter cable to give this idea a test. These cables consist of two Molex four-pin connectors on one end and a six-pin PCIe connector on the other.

Using the same Silverstone 650W power supply as before, we ran the second 512MB 7800 GTX with the adapter connected to a different cable. At this point, everything ran without a hitch. This explains why Falcon Northwest didn't run into this problem. Most people who order a high-end gaming rig don't just ask for SLI or CrossFire. They load them up with multiple hard drives, lots of RAM, and other goodies, which results in a more balanced load.
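The cross-loading rule Silverstone describes is easy to model. Here's a minimal sketch with the thresholds taken straight from the quoted ST60F spec (the example loads are made up):

# Silverstone ST60F cross-loading rule from the quote above: the +5V
# rail must carry a minimum load before +12V can deliver its maximum.
def min_5v_load(amps_12v):
    # Minimum +5V load (A) required at a given +12V output (A).
    if amps_12v > 38:
        return 15
    if amps_12v > 30:
        return 10
    return 0

def cross_load_ok(amps_12v, amps_5v):
    return amps_5v >= min_5v_load(amps_12v)

print(cross_load_ok(36, 12))  # True: balanced system, no shutdown
print(cross_load_ok(36, 4))   # False: heavy 12V with a light 5V load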

 
maverik-sg1 said:
Firstly, don't despair, my friend. Be mindful that the PCP&C 850W unit kicks 16A into the two PCI-E ports, 34A into the 8-pin connector, and has two other 17A rails. Use one PCI-E connector from the PSU and a Molex-to-PCI-E adapter to supply the other card (I suggest you feed off the Molex line that goes to your fan controller and hard disks if possible).

All your problems will just float away ;)

Alright Mate,

I'm only running 1 graphics card :@. I'm gonna try linking the 2 PCI-E connectors into 1, so that should provide 32A to my card, but I don't know if it's OK to do that?

All I'm running is: FX-60, X1900XTX, RDX200 mobo, 1 HDD, 2 x 512 BH-5, and the Mach of course!!!

The rails on the PSU can't handle the draw of all this at full bore. The main thing I can't understand is why my 3.3V line is sitting at 3.18V when there's nothing using it???

The FX-60 really drags the rails down: the 12V line goes from 12.18V with a stock FX-60 down to 11.9V with the FX-60 running 3.45GHz at 1.55V!!!!
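For reference, both of those readings are actually inside the ATX12V +/-5% tolerance. A quick check (the tolerance figure is the standard spec, not something from this thread):

# Check the reported voltages against the ATX12V +/-5% tolerance
# (11.40-12.60V on the 12V rail, 3.14-3.47V on the 3.3V rail).
def within_atx_tolerance(nominal, measured, pct=0.05):
    return abs(measured - nominal) <= nominal * pct

print(within_atx_tolerance(12.0, 11.90))  # True: 11.9V is still in spec
print(within_atx_tolerance(3.3, 3.18))    # True: 3.18V is in spec, if low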

------------------------------------------------------------------------------

Any advice for connecting the 2 PCI-E cables together would be appreciated.

Thanks

Dave
 