Budget storage server

Spent most of today trying to debug the problem and not getting very far with it. I decided it would be easier to debug down in the house rather than in the loft, so I labelled all of the drives and cables, pulled the disks out and moved it downstairs. Once rebuilt, it was completely dead. I've tested the power supply on the old X79 system and it appears to be working fine. Took the working Antec supply out of that and plugged it into this and got squat; it won't even spin a fan. Tried it with no expansion cards and with no RAM, same result. I suspect the motherboard has died.

The 790FX and Phenom could go back in to get it working, but a new board is definitely needed in order to accommodate the additional expansion card for 10GbE. The dilemma is whether to bite the bullet, go new and write off the 20GB of DDR3 I have lying around, or find something used that will make use of that RAM.
 
The budget side of this project just went out of the window. After a long hard think, I have caved in and bought the following.

i5-9600K
ASUS Prime Z390-A
16GB TeamGroup Vulcan 3000C16

I was looking to go Ryzen with a 1600 or 2600; however, looking at the PCIe lanes available, I'd not have enough to run the NVS 310 GPU, the RAID card and a 10GbE NIC. That meant going with something with an IGP. On the Ryzen side that only leaves the 2200G and 2400G, which are OK, but I wanted to move up to a hex core. That left me looking at Intel: even though Ryzen has 8 more PCIe lanes, I'd have to tie 16 of them up with a GPU, leaving me 8 lanes worse off than going Intel. I considered the i5-8400 as that's the cheapest 6-core they do, but given the £40 difference between that and the 9600K OEM chip, the 9th-gen part felt more sensible since it's soldered and it's a K SKU. The challenge now will be getting the cooler to fit, as I have an original Prolimatech Megahalems which only came with the 775/1366 mounting parts. I bought the AMD retention kit separately back in 2009, but the 115x kit seems to be very hard to find these days.
 
Oh dear. :lol:


That's going to be some snappy storage. Planning to do live encoding for streamed video or something?
 
Frankly it'll be well overpowered, so as usual I'll be undervolting it and possibly reducing the clock speeds too in order to save power. It'll get used for encoding from time to time so I can switch my desktop off at night, but that isn't all that frequent.
 
Started work on the bracket mods needed to make my original Megahalems (775/1366 only) cooler fit on socket 1151. I decided the best approach would be to swipe a backplate from one of my other coolers and make it work with a mixture of mounting hardware: one Noctua backplate combined with some bolts in the same thread as the original Prolimatech parts, plus the Noctua black plastic spacers filed down to match the thickness of the Prolimatech metal spacers. The only bolts I had in the correct thread and of sufficient length had countersunk heads, so I found some suitable washers for them. The upper mounting plates then needed the 775 holes filing out towards the 1366 mounting holes, as 775 is 72mm hole spacing, 115x is 75mm and 1366 is 80mm.



Backplate fitted.


Mounting plates fitted.


Test run with paste to see how the spread looked. IMO, pretty much as good as any stock mounting setup.


Built. Currently tested and working OK, although the CentOS install media is not playing ball: it locks up with a black screen after selecting Install CentOS 7. The old openSUSE 42.3 install boots OK, but it definitely has driver issues, as the GUI is dog slow, suggesting no GPU acceleration, and again no network interfaces are working.


In other news, the Mellanox ConnectX-3 that I picked up for the server is as dead as a doornail. I've tried it in 4 different systems in a variety of PCIe slots and it doesn't show up as a device in either Windows or Linux.
 
These arrived in work today. The end is in sight now, just need to get another NIC for the server and some fibre.


At present I've only got one OM3 MM fibre patch cable so I can't test it properly, but it was extremely pleasing to see this once I connected my machine up.
 
Finally finished sorting out the OS. Had a hell of a job setting up vncserver, as PolicyKit issues made running GUI applications with elevated permissions difficult. The simple solution was to switch over to x0vncserver instead, which makes a lot more sense as it means I'm not running a separate desktop. Still suffering from some Intel i915 driver issues; weirdly, the problems returned when I changed screen. Turning off window compositing in Xfce stopped the shedload of errors, and it makes little difference to the usability of the OS.
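For anyone wanting to do the same, a minimal sketch of the x0vncserver approach, assuming TigerVNC's x0vncserver and a password file already created with vncpasswd (the display number and port below are assumptions):

```python
#!/usr/bin/env python3
"""Minimal launcher for TigerVNC's x0vncserver, which shares the real X
session instead of spawning a separate desktop. Assumes a password file
has already been created with `vncpasswd`."""
import os
import subprocess

DISPLAY = ":0"                                # assumed display of the running session
PASSWD = os.path.expanduser("~/.vnc/passwd")  # created beforehand with vncpasswd

subprocess.run(
    ["x0vncserver",
     "-display", DISPLAY,
     "-rfbport", "5900",          # default VNC port
     "-passwordfile", PASSWD],
    check=True,
)
```

Because it attaches to the existing session rather than spawning a second desktop, anything run with elevated permissions in the real session simply shows up over VNC, which sidesteps the PolicyKit problem.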

Memory usage seems a tad high, but from what I can tell most of it is the LSI MegaRAID Storage Manager server (1.7GB being used by Java).
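To double-check where it's actually going, a quick sketch that ranks processes by resident set size straight from /proc (Linux-only):

```python
#!/usr/bin/env python3
"""Rank processes by resident memory using /proc/<pid>/status (Linux)."""
import os

procs = []
for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/status") as f:
            fields = dict(line.split(":", 1) for line in f if ":" in line)
        rss_kb = int(fields["VmRSS"].split()[0])  # resident set size in kB
        procs.append((rss_kb, fields["Name"].strip()))
    except (KeyError, FileNotFoundError):
        continue  # kernel threads have no VmRSS; pids can vanish mid-scan

for rss_kb, name in sorted(procs, reverse=True)[:10]:
    print(f"{rss_kb / 1024:8.1f} MiB  {name}")
```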


All I need to sort now is finding a replacement SFP+ NIC for 10GbE and re-cabling the house. I've been looking at Solarflare NICs, as Mellanox ConnectX-3s are still pretty expensive and having had one arrive DOA makes me less keen on them. Another Intel X710 is a potential option, albeit the most expensive.
 
Replacement NIC bought. After some lengthy research, I decided to try Solarflare and picked up an SFN7122F card nice and cheap to replace the DOA Mellanox CX3.



Tested it in one of the Windows machines first and all signs were good, although the drivers would not install: they kept causing an NDIS SYSTEM_THREAD_EXCEPTION_NOT_HANDLED bluescreen. Not sure of the cause; possibly the firmware on the card could have done with being updated. But the card showed a link with my Intel SFP installed and Windows recognised the device, which was good enough for me. Swapped it back out with my X710 card, put it into the server and, surprisingly, it worked straight from the get-go: SFC9120 driver present and a 10Gb link. Yay. Now all I need to sort out is running some fibres.


Things have been running well so far. Ironed out the Vivaldi framework RAM issues, but the Plex DLNA server seems hell-bent on slowly chewing through RAM too.
 
I wouldn't look too much into the memory consumption until you actually risk running out - not much of it is actually in use.
 
I keep an eye on it every couple of days and it just slowly rises, so it has the likelihood of chewing its way through all 16GB if left unchecked.
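Something like this, dropped into cron, would do for logging the trend; MemAvailable is the figure worth watching, since reclaimable cache inflates plain "used" numbers (the log path is arbitrary):

```python
#!/usr/bin/env python3
"""Append a timestamped MemAvailable reading to a log file; run from cron.
MemAvailable already discounts reclaimable cache, so it's a better
run-out-of-RAM indicator than MemFree."""
from datetime import datetime

with open("/proc/meminfo") as f:
    meminfo = dict(line.split(":", 1) for line in f)

avail_mb = int(meminfo["MemAvailable"].split()[0]) // 1024  # kB -> MiB

with open("/var/log/memavailable.log", "a") as log:  # arbitrary path
    log.write(f"{datetime.now():%Y-%m-%d %H:%M}  {avail_mb} MiB available\n")
```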
 
Finally picked up a second OM3 fibre so I can do a test run at 10Gb. Performance in one direction looks pretty good, but not so good in the other direction.

Running my Threadripper workstation with the Intel X710 as the iperf server nets 7Gbps with the 9600K/Solarflare NAS as the client. The other way around only got 3.5Gbps. So far I've adjusted the Tx/Rx buffers on the Intel card to their maximums, but I haven't made any changes to the Solarflare as I'm less familiar with Linux driver tweaks.
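For a repeatable comparison, iperf3 can drive both directions from one end using its reverse flag; a rough sketch parsing the JSON output (the server address is a placeholder):

```python
#!/usr/bin/env python3
"""Test both traffic directions from one end with iperf3's reverse flag.
Assumes `iperf3 -s` is already running on the other machine; the address
below is a placeholder."""
import json
import subprocess

SERVER = "192.168.1.10"  # placeholder for the NAS

for label, extra in [("this host -> server", []), ("server -> this host", ["-R"])]:
    result = subprocess.run(["iperf3", "-c", SERVER, "-J", *extra],
                            capture_output=True, text=True, check=True)
    bps = json.loads(result.stdout)["end"]["sum_received"]["bits_per_second"]
    print(f"{label}: {bps / 1e9:.2f} Gbps")
```

On the Linux side, the rough equivalent of the Windows buffer tweak is ethtool's ring settings (`ethtool -g` to view, `ethtool -G` to raise them), assuming the sfc driver exposes them.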
 
Why not schedule a restart of Plex every couple of days? It should be pretty easy to automate.
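Something along these lines would do it; a sketch assuming a systemd-managed install where the service is named plexmediaserver, run from cron so Plex only gets restarted once its resident memory passes a threshold:

```python
#!/usr/bin/env python3
"""Restart Plex once its resident memory passes a threshold; run from cron.
Assumes a systemd-managed install with the service named 'plexmediaserver'."""
import subprocess

LIMIT_MB = 2048  # arbitrary threshold

rss_kb = 0
pids = subprocess.run(["pgrep", "-f", "Plex Media Server"],
                      capture_output=True, text=True).stdout.split()
for pid in pids:
    try:
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("VmRSS"):
                    rss_kb += int(line.split()[1])
    except FileNotFoundError:
        pass  # process exited between pgrep and the read

if rss_kb / 1024 > LIMIT_MB:
    subprocess.run(["systemctl", "restart", "plexmediaserver"], check=True)
```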
 
Since the reboot after installing the new NIC, Plex memory usage seems to have remained reasonably constant.

I'm now chasing 10-gig Ethernet issues on the Windows machines.
 
The Windows 1903 update seems to have mostly resolved the issues I was seeing with iperf, though I'm still getting better speeds to the server than I do from it.

I bought some OM3 fibres from FS along with some LC-LC links for the wall boxes I got from RS a few months back. The wall boxes should be pretty resilient, as the fibres will enter at the base of the boxes, keeping the connections virtually flush with the wall to avoid breaking them.


Since I now have sufficient fibres for a proper test run, it'd be rude not to. As a quick test to make sure all the ports and fibres work OK, I connected the server with two links using the 3 Avago transceivers I have. All working great; the Juniper doesn't seem to mind the Avago transceivers.


Quick file transfer test from my PC to the server. This is off an NVMe SSD to avoid SATA bottlenecks. Normal transfers will be a fair bit slower sadly, but at least now I can have two PCs copying to the server and two TVs streaming from it without interruption.
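For context, the back-of-an-envelope numbers on why the source disk matters:

```python
# Back-of-an-envelope: why an NVMe source matters for a 10GbE transfer.
ten_gbe = 10e9 / 8 * 0.95   # ~1.19 GB/s payload after framing/TCP overhead (rough)
sata3 = 6e9 * (8 / 10) / 8  # 8b/10b line coding caps SATA III at ~600 MB/s
print(f"10GbE ~{ten_gbe / 1e9:.2f} GB/s vs SATA III ~{sata3 / 1e6:.0f} MB/s")
```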
 
Bought some new fans for the Juniper switch to try and quieten it down a bit without triggering a fan failure warning.

2x 40x40x28mm 12,500rpm San Ace fans instead of the 18,000rpm originals. Idle fan voltage is 4.5V, so something like a Noctua that maxes out at 5,000rpm would run too slowly at idle; see the quick estimate after this list.
A pack of Molex plug bodies and pins for fan headers (as the fans above come with bare ends)
A pack of Molex ATX pins and sockets for making up custom PSU cables in the future
A Molex pin extraction tool for the ATX pins
Crimping dies for insulated and uninsulated terminals
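A quick sanity check on the idle-speed point, assuming fan rpm scales roughly linearly with voltage:

```python
# Rough idle-speed estimate, assuming fan rpm scales ~linearly with voltage.
IDLE_V, FULL_V = 4.5, 12.0

for name, max_rpm in [("San Ace 12,500rpm", 12500), ("Noctua 5,000rpm", 5000)]:
    print(f"{name}: ~{max_rpm * IDLE_V / FULL_V:.0f} rpm at idle")
# The Noctua would idle below 1,900rpm, which is likely slow enough to
# trip the switch's fan-failure detection.
```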


Fitted and now tested. All working OK: significantly quieter both at idle and at full speed, and no fan failure warnings. Best of all, it's still idling at 44 degrees.


Server cpu temps looking rather chilly at this time of year.
 
First time I've had the RAID controller alarm go off today. Not sure of the cause, as no disks show any media errors, the controller itself shows no errors, and there's nothing wrong on the SAS expander either.






Hopefully a reboot will sort it.
 
Just read through this whole thread, really impressive setup. I have been planning on turning my 1090T+GA-890FXA-UD7 into a storage server. It's getting a bit old now, but smoke what yer brung ey?
 
Nowt wrong with the older hardware. I'd still be running my Phenom if I hadn't needed more PCIe lanes. It was incredibly annoying when the 990FX board inexplicably died.

I just figured out the issue: the array should have 7 disks in it and only 6 are showing, so it looks like my first disk failure in 7 years.
 
Replacement disk arrived plus an extra to run as a hot spare.


Rebuild started. I reckon the estimate is a tad optimistic; it'll still be going in 12 hours' time, let alone 6, probably longer than that.


Found the disk that had gone AWOL; its slot is where I'll put the hot spare once the rebuild finishes.
 
I decided that the old CM690 II case, with its limited drive support, was going to become a hurdle. The failed backplane didn't help matters either, so I wanted to eliminate backplanes from the build. I found that most cases that were large enough and supported the necessary quantity of disks had already been discontinued. The only stand-out options still available were the Corsair 750D and the Fractal Define XL. The drive cages for the Define were pretty much unobtainium, whereas the cages for the Corsair are still available. I prefer the Fractal side panel without the window, but c'est la vie.

New case bought. I went to order the drive cages from Corsair directly and, typically, they had gone out of stock.


I'm hoping to find a fan bracket that fits the PCIe slot brackets so I can get good airflow across the expansion cards, as the controller, expander and 10GbE card all get quite hot without direct airflow.
 