Budget storage server

Decided I wanted to build a server/NAS to help with backups etc., so I've cobbled together all of my spare parts and collected a few freebies over the last few months. Here's how it looks at present:

Gigabyte MA-790FXT-UD5P
AMD Phenom II X4 955
Corsair Dominator 4GB 1600
Nvidia Quadro FX3500
LSI MegaRaid SAS 8888ELP
Delta 825W PSU
Belkin USB 3.0 card
Prolimatech Megahalems
Coolermaster CM690-II

Hard-drive-wise, I currently have a load of different disks ranging from 500GB to 1TB (8x 500GB, 2x 640GB and a 1TB). I reckon 8x 2TB drives on the HBA in RAID 5 (maybe 6) should suffice for main storage, though I may start out with 4 due to cost and expand later. I can then use the 4x 500GB Seagate Constellation ES2 drives I already have in a RAID 10 array on the onboard SATA for the OS, giving 1TB mirrored, which should fit within the non-EFI boot constraints.

I'll need to get two ICYBOX backplanes that fit 3x 3.5" drives into 2x 5.25" bays. I'm hoping to run Ubuntu Server 14.04 on it too, though I'm not sure on the filesystem yet.
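As a rough sketch of the usable space I'm expecting from those layouts (just shell arithmetic, decimal TB/GB, ignoring filesystem overhead):

```bash
# Back-of-envelope usable capacity for the planned arrays.
disks=8; size_tb=2
echo "8x 2TB RAID 5:    $(( (disks - 1) * size_tb )) TB"   # one disk's worth of parity
echo "8x 2TB RAID 6:    $(( (disks - 2) * size_tb )) TB"   # two disks' worth of parity
echo "4x 500GB RAID 10: $(( 4 / 2 * 500 )) GB"             # mirrored pairs -> 1TB for the OS
```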



Trying to keep it reasonably neat, though I imagine cable chaos will ensue once I have all 12 disks fitted.



My old Phenom II X4 955, which I bought back in 2008 and ran at 3.7GHz right up until January this year, has a new lease of life. I will probably drop the clocks back down and see how low I can drop the voltage (being an old C2 stepping, it liked lots of volts). I get the feeling it'll be overkill for the intended purpose.


An HP-branded LSI MegaRAID 8888ELP 8-port SAS HBA and an Nvidia Quadro FX3500 (again, probably too powerful for the purpose).


Tried the PSU in the Phanteks and it doesn't fit either: even though the bolt locations are ATX, the physical size is WTX, which is significantly bigger than ATX. Nicely made supply though.


Unfortunately, upon testing I have not been able to get any life from it; it just powers on and sits there with the GPU fan spinning full tilt. I've tried some basic things like resetting the CMOS, trying one memory module at a time and disconnecting all drives, but no luck. So far I've found that the GPU, RAID card and memory are all still working perfectly. That leaves the PSU, motherboard and processor. The motherboard, processor and RAM were all working happily together before, though, which leaves me with major doubts about the PSU. I'll need to pull the PSU out of the other PC to test tonight. If that's where the fault lies, I'll have to buy a new one. (I have a sneaking suspicion that the WTX pinout differs from ATX.)
 
Swapped out the PSU for one from the backup PC and it now works as it should, so a new PSU is on the cards. A low-power unit should be perfect, but it will need to be efficient. Looking at a SuperFlower Golden Green 450/550W unit for that.

I'm also looking at what fans to get, as airflow will need to be high since the RAID controller produces a lot of heat. I've been considering either high-RPM eLoops or the new Silverstone SST-FQ121 FQ Gentle Typhoon copies. Since there are 7 spaces to fill, I'm leaning towards the Silverstones as they are cheaper. I also have a 38mm-thick Matsushita which I may use either as an intake or exhaust, though I believe it is very high RPM.

Last of all, I discovered that the SAS-SATA cable that came with the card is a reverse breakout cable and therefore doesn't work for my intended purpose; it doesn't even detect drives when plugged in. (Reverse breakout runs motherboard SATA ports to a SAS backplane; a forward breakout cable is what's needed to run SATA drives from a SAS controller.)
 
Ordered the necessary parts on Sunday. It'll all be arriving in dribs and drabs this week.

This is the full spec now:

Phenom II X4 955
Gigabyte MA-790FXT-UD5P
4GB Corsair Dominator 1600
LSI MegaRAID 8888ELP HBA
Nvidia Quadro FX3500 (looks to be the equivalent of a 7900GS)
4x 2TB WD Se drives (will expand to 8 further down the road, £800 on disks is a bit much, still deciding between RAID 5 or 6)
4x 500GB Seagate Constellation ES2 (RAID 10 for OS)
2x LSI CBL-SFF8087OCF-06M forward breakout cables
2x ICYBOX 553SK SAS/SATA backplanes (allows for 6 disks to be fitted into the 5.25" bays)
1x Noctua NF-F12 IndustrialPPC 3000RPM PWM (for the cpu cooler)
5x Scythe Kama Flow2 1900RPM Fan - 120mm (I'm hoping they are basically the same as the old S-flex series)
1x NMB-MAT 4715KL-04W-B40 120x38mm fan (ridiculously loud even at 7 volts)
1x Yate Loon D14BH-12 140x25mm fan (liberated from my TX650 psu)
550W SuperFlower Golden Green HX PSU
 
Bits from OcUK arrived this afternoon.


Finally starting to resemble a server. Not sure whether to have the side panel fan pull air in or blow it out.



I've tried to keep the cables reasonably tidy. Nothing is tied in yet as I need to wait for the remaining parts to arrive. I reckon the SAS cables are going to be a nightmare to keep neat.
 
Quick question: why do you have a graphics card in the system? If it's just a server, couldn't you just SSH into it for admin stuff?
 

That will be because the motherboard has no onboard graphics, so booting the machine would be a problem without it.
 
Pretty much as above. In any case, until I've actually installed the OS I wouldn't be able to remote in. According to the technical data, the maximum power consumption is 80W, which doesn't sound too bad considering it'll be idle 90% of the time.
 
Wait, what? All of that will only sip 80W of power? How is that possible? Sorry if this is a stupid question; it's just that I have no experience with this sort of thing.
 
I wish the whole lot would manage 80W; that's just the Quadro GPU's maximum consumption. The Phenom II is a 125W CPU on its own, though I'm going to try and reduce that by dropping the clock speed and voltage.

Here's a rough idea of maximum power consumption:

Phenom II - 125W
Quadro FX3500 - 80W
Motherboard - 70-75W (estimated)
LSI 8888ELP - 19.2W
Seagate Constellation ES 500GB - 6.5W (each) (26W for 4)
WD Se 2TB - 7.3W (each) (58.4W for 8)
Scythe fans - 4.5W (each) (22.5W for 5)
Noctua fan - 3.6W
NMB-MAT fan - 11W
Yate Loon D14BH-12 - 9W
USB 3.0 card - 5W (estimated)

Total - 435W Maximum.

At idle this should be significantly lower.
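Quick sanity check of that total (taking the motherboard at the 75W upper estimate):

```bash
# Sum of the per-component maximums listed above.
echo '125 + 80 + 75 + 19.2 + 4*6.5 + 8*7.3 + 5*4.5 + 3.6 + 11 + 9 + 5' | bc -l
# -> 434.7, i.e. the ~435W maximum quoted
```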
 
OK, that makes way more sense. I was thinking 80W for that entire system would be nuts.

Why not get a more efficient CPU like a Celeron with onboard graphics? You could get one for about $50 and it would only consume something like 30W.
 
Have you ever used the SATA ports provided by the extra onboard Marvell controller, i.e. those that aren't connected to the AMD SB750 southbridge on that board? The extra 4 ports could come in handy on a storage server.

I've had the same motherboard myself for over 5 years now, and I've always had the Marvell controller disabled.
 

It may just be me, but I've always found the Marvell controllers to be rather shoddy; then again, I guess it might be alright for a storage server.
 
I did use the Marvell ports a long while back when I ran this as my main system with 7 disks + ODD. I didn't really notice a difference in performance, but I know that they are slower. (Odd really, since Marvell make some excellent SSD and LAN controllers.)

As for getting a lower-powered system, the intention of this project was to utilise kit I already owned but wasn't using, rather than leaving it to sit gathering dust. I know it probably makes more economic sense to use efficient parts, but this project is expensive enough as the disks cost so much. I'll optimise it as best I can to reduce consumption. The old 3.7GHz overclock is definitely out, as it needed 1.45V, which probably pushed the consumption over 140W.

Rest of the kit arrived whilst I was in work today.
 
I'm loving that case.

A Haswell Celeron would be a better CPU choice though. If you sold your old kit, including the Quadro, the net cost would be very small.
 
True, I think it probably would be quite cheap to swap over if I sold my current kit on. I'd rather avoid the hassle of selling it though.

Seems I forgot to hit post last night after finishing. The second backplane was an extremely tight fit and took some serious persuasion. The fans are a little loud, but not a patch on a real server. It now works correctly and the RAID card is finally seeing the disks connected to it (including those on the backplanes, after I realised I'd plugged the wrong ports in).



The back could possibly be a bit tidier, but it could have been far worse, so it'll do; it shouldn't impede airflow or affect functionality.


I left it initialising the array this morning. With 4 disks in a RAID 5 array I get 5.4TB of space, so with all 8 I should have around 12.6TB. Some people will probably tell me that I'm nuts using RAID 5, but it makes sense to me: the data isn't going to be crucial, but some redundancy is nice as I'd rather not have to restore the whole lot if one disk fails, like I would with RAID 0. RAID 6, 10 and 50 all consume more disk space than I'd like, though they are a lot better for redundancy.
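For anyone wondering why 2TB disks come out at those figures: the controller appears to report binary TiB while the drives are sold in decimal TB, so the numbers line up near enough:

```bash
# RAID 5 usable space = (disks - 1) data disks; 2TB = 2*10^12 bytes.
echo '3 * 2*10^12 / 1024^4' | bc -l   # 4 disks -> ~5.5 TiB (the "5.4TB" reported)
echo '7 * 2*10^12 / 1024^4' | bc -l   # 8 disks -> ~12.7 TiB
```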
 
I've personally never had a RAID array fail on me. I've even had 5x 2TB Seagate Barracudas in RAID 0 in the home computer so that everyone can store their data on there. Makes the 250GB Samsung Evo drive look tiny in terms of storage :D I managed to snag the drives for only $100 apiece a couple of years ago and they have never failed me, even with data constantly being written and read ~20/7.
 
Went through the ordeal of trying to install Ubuntu Server onto the RAID 10 array. First off, the installer couldn't find it at all, just the LSI card and the RAID 5 array. I tried setting the onboard SB750 SATA back to AHCI and IDE modes, then gave up and pulled the LSI card out, at which point the drive list showed empty. I gave up on the GUI method of installing and opted to try the CLI. Some moderate success was had here using parted and mdadm, as the devices were present (/dev/sda, sdb, sdc, sdd), and I managed to fudge my way through creating a software RAID with mdadm as device /dev/md0. Back in the installer, hey presto, the device was listed as a 999GB disk ready to partition. I tried to set it going with the partitioning tool and it failed to write changes to the disk, even after I waited for the synchronisation to finish. 6 hours wasted last night.
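It was roughly along these lines (reconstructed from memory, so treat the device names and partitioning as indicative rather than my exact commands):

```bash
# Partition the four onboard-SATA disks and build a software RAID 10 from them.
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    parted -s "$d" mklabel msdos mkpart primary 1MiB 100%
    parted -s "$d" set 1 raid on
done
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
cat /proc/mdstat    # watch the initial sync before going back to the installer
```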

Going to try running the motherboard "fakeraid" again and see if I can find it using the CLI (something I didn't try). I can't work out how to get Linux software RAID to work. I can't use the LSI, as I don't have enough SAS ports to run 12 disks and an expander is quite an expensive way to get one extra port. I might just give up and resort to Windows Server instead.
 
I've run into some difficulties trying to set it up, as I can't persuade GRUB2 to boot from an mdadm RAID 10 array. I've been trying to sort it out for 3 days and have finally given up. I can see a few different choices available:

1. Buy an HP SAS expander card and use the LSI to run the RAID 10 array in addition to the RAID 5 array
2. Bin the whole RAID 10 idea and just use my spare SSD for the OS
3. Run Windows Server instead

The choice that seems most logical to me is #2, but I'll buy the expander anyway, as dumping the 4 OS disks out of the case would allow me to use all 12 bays for the storage array (or more, should I upgrade to a larger case).
 
As it stands, I've gone the SSD route and will see about picking up an expander. I've run into yet more headaches with Ubuntu trying to sort out Samba sharing in 14.04, so I've decided to cut my losses there and try another distro instead. I'll be going with openSUSE this time as it seems to have better network/server support. If that fails, I'll give CentOS a go before I give up and just go with Windows.
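For completeness, this is the sort of minimal share I was trying to get working under 14.04 (share name, path and user are placeholders, not my actual config):

```bash
# Append a basic share to /etc/samba/smb.conf, then check and restart Samba.
sudo tee -a /etc/samba/smb.conf > /dev/null <<'EOF'
[storage]
   path = /mnt/storage
   browseable = yes
   read only = no
   valid users = youruser
EOF
testparm -s                  # sanity-check the config
sudo smbpasswd -a youruser   # Samba keeps its own password database
sudo service smbd restart    # 14.04 is still upstart-based
```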

Got another 2 disks for the array coming tomorrow too.
 
I have put openSUSE onto it and, lo and behold, everything now works as it should. Since dumping the RAID 10 disks freed up 4 drive bays, I have also gone and picked up an Intel RES2SV240 SAS expander card to increase my port capacity to 16 dual-linked. I can expand that to 20 if I single-link it, and adding a second expander to the other port would give me 40; I can also daisy-chain expanders, but I somewhat doubt I'll ever need that much space. When testing the array it completely saturated the gigabit LAN connection, managing a sustained write speed of 109MB/s for large video files; smaller files dropped down into the mid 50s. Read speeds seem to be nice and high at ~400-500MB/s. (This was using just 4 disks.)
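The network figures came from copying real video files over gigabit; for a crude local check of the array itself, something like this gives a similar picture (mount point is whatever yours is):

```bash
# Rough sequential write/read test on the array mount point (path assumed).
dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=8192 oflag=direct   # write
dd if=/mnt/array/ddtest of=/dev/null bs=1M iflag=direct              # read
rm /mnt/array/ddtest
```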

I also got remote desktop working via VNC.


I also dropped another 2 disks into the array this morning to expand the storage to a shade under 10TB. Reconstructing the array is taking a very long time though; it's taken 12 hours to get to ~40%. The ICYBOX backplanes look great when running.
 