APOLLO (2CPU LGA1366 Server | InWin PP689) - by alpenwasser [COMPLETE: 2014-MAY-10]

alpenwasser

Table of Contents

  1. 2013-NOV-13: First Hardware Testing & The Noctua NH-U9DX 1366
  2. 2013-NOV-16: Temporary Ghetto Setup, OS Installed
  3. 2014-APR-01: PSU Mounting & LSI Controller Testing
  4. 2014-APR-02: The Disk Racks
  5. 2014-APR-08: Chipset Cooling & Adventures in Instability
  6. 2014-APR-09: Disk Ventilation
  7. 2014-APR-11: Fan Unit for Main Compartment Ventilation
  8. 2014-APR-12: Storage Topology & Cabling
  9. 2014-APR-26: Storage and Networking Performance
  10. 2014-MAY-10: Sound Dampening & Final Pics


Wait, What, and Why?

So, yeah, another build. Another server, to be precise. Why? Well, as nice a
system as ZEUS is, it does have two major shortcomings for its use as a server.

When I originally conceived ZEUS, I did not plan on using ZFS (since it was not
yet production-ready on Linux at that point). The plan was to use ZEUS' HDDs as
single disks, backing up the important stuff. In case of a disk failure, the
loss of non-backed-up data would have been acceptable, since it's mostly media
files. As long as there's an index of what was on the disk, that data could
easily be reacquired.

But right before ZEUS was done, I found out that ZFS was production-ready on
Linux, having kept a bit of an eye on it since fall 2012 when I dabbled in
FreeBSD and ZFS for the first time. Using FreeBSD on the server was not an
option though since I was nowhere near proficient enough with it to use it for
something that important, so it had to be Linux (that's why I didn't originally
plan on ZFS).

So, I deployed ZFS on ZEUS, and it's been working very nicely so far. However,
that brought with it two major drawbacks: Firstly, I was now missing 5 TB of
space, since I had been tempted by ZFS to use those for redundancy, even for our
media files. Secondly, and more importantly, ZEUS is not an ECC-memory-capable
system. The reason this might be a problem is that when ZFS verifies the data on
the disks, a corrupted bit in your RAM could cause a discrepancy between the
data in memory and the data on disk, in which case ZFS would "correct" the data
on your disk, therefore corrupting it. This is not exactly optimal IMO. How
severe the consequences of this would be in practice is an ongoing debate in
various ZFS threads I've read. Optimists estimate that it would merely corrupt
the file(s) containing the affected bit(s); pessimists are afraid it might
corrupt your entire pool.
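The failure mode is easy to sketch: ZFS compares each block against a stored checksum, and a bit flipped in RAM makes a perfectly intact on-disk block look corrupt. A minimal Python illustration (hypothetical data, with SHA-256 standing in for ZFS's internal checksums):

```python
import hashlib

def checksum(block: bytes) -> str:
    """Stand-in for ZFS's per-block checksums."""
    return hashlib.sha256(block).hexdigest()

# Data as originally written, with its checksum stored alongside it.
on_disk = b"important business records"
stored_sum = checksum(on_disk)

# The same data read back into RAM, but with one bit flipped by a
# memory error on a non-ECC system.
in_ram = bytearray(on_disk)
in_ram[0] ^= 0x01  # flip the lowest bit of the first byte

# The checksum no longer matches, so a scrub would treat the
# (actually intact) data as damaged and try to "repair" it.
print(checksum(bytes(in_ram)) == stored_sum)  # False
```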


The main focus of this machine will be:

  • room to install more disks over time
  • ECC-RAM capable
  • not ridiculously expensive
  • low-maintenance, high reliability and availability (within reason, it's still
    a home and small business server)


Hardware

The component choices as they stand now:

  • M/B: Supermicro X8DT3-LN4F
  • RAM: 12 GB ECC DDR3-1333 (Hynix)
  • CPUs: 2 x Intel L5630 Quad Cores, 40 W TDP each
  • Cooling: 2 x Noctua NH-U9DX 1366 (yes, air cooling! :o )
  • Cooling: A few nice server double ball bearing San Ace fans will also
    be making an appearance.
  • Case: InWin PP689 (will be modded to fit more HDDs than in stock config)
  • Other: TBD


Modding

Instead of some uber-expensive W/C setup, the main part of actually building
this rig will be modifying the PP689 to fit as many HDDs as is halfway
reasonable, as neatly as possible. I have not yet decided if there will be
painting and/or sleeving and/or a window. A window is unlikely, the rest depends
mostly on how much time I'll have in the next few weeks (this is not a long-term
project, aim is to have it done way before HELIOS).

Also, since costs for this build should not spiral out of control, I will be
trying to reuse as many of the scrap and spare parts I have lying around as possible.


Teaser

More pics will follow as parts arrive and the build progresses, for now a shot of the
case:

(click image for full res)



That's all for now, thanks for stopping by, and so long. :)
 
anonymity protecting long legged ostriches. i expect nothing but the best.
i also love the combination of cheap, xeon and ecc ram.
 

Haha, yeah it took me a while to find something for which APOLLO could stand. :rolleyes:

The L5630's were 60 USD a piece plus 20 USD shipping, less than a tenth of my X5680's
for HELIOS. The M/B was 200 USD plus 50 USD shipping plus 50 USD VAT ( :( ), still
pretty cheap considering it once cost nearly 600 USD. Come to think of it, the Noctua
CPU coolers were actually more expensive than the CPUs themselves... :lol:

I looked around quite a bit until I found the right balance. There's no need for high
performance equipment, so originally I thought I'd go with a very low-end LGA1155
single socket Xeon, but the M/B's for that platform which have some halfway decent
features are actually still pretty expensive, and the CPUs themselves are nowhere
near as cheap as their 1366 counterparts (you can get some L5639 hexacores
for ~80 USD a pop on eBay, which would actually be pretty neat if you get them onto
an SR-2 and get a decent overclock, so if I ever burn out my X5680's... :o ).

The one downside of LGA1366 server M/B's is that most of their integrated SAS
controllers do not support HDDs larger than 2 TB, but that can be worked around
by either having more HDDs of smaller sizes or buying a fairly cheap, slightly newer
host bus adapter card as an add-on.

I must however say I quite like the fact that there is much better vendor support for
Linux if you buy server equipment. :cool:


EDIT:
Besides the more pragmatic reasons: Dual socket systems are just cool IMO. :D
 
Well this should be interesting, if previous build logs are anything to go by. ;-)

It won't be as ridiculous as HELIOS, but I'm hoping to provide some good entertainment
nonetheless. So thanks!

Epic name :lol: and subscribed of course!

What can I say, I have a penchant for Greek mythology... :rolleyes:

A build log with your name on it,
I had no choice but to sub to it

Haha, sorry to force your hand. :D
 
im in!!

Also I'm quite interested in your opinion on ZFS; I've been looking at converting my Windows file server to a FreeNAS implementation using 6x3TB disks in parity, just wondering if I'd see a performance increase.
 
First Hardware Tests & The Noctua NH-U9DX 1366

First Steps


Hardware Tested

M/B, CPUs and memory have all arrived. The CPUs and M/B seem to be working OK.
One of the memory modules seems to be having a bit of trouble being recognized,
the other five work fine. I'll see if it's really defective or if it's just the
IT gods screwing with me a bit.


The Noctua NH-U9DX 1366

The Noctua NH-U9DX 1366 is a cooler from Noctua's series specifically made for
Xeon sockets. For those who don't know, LGA1366 sockets have an integrated
backplate, just like LGA2011, which makes them much more convenient than their
desktop counterparts. It's quite a nice and sturdy backplate, too, in fact it's
among the most solid backplates I've come across yet. This does, however,
require a slightly different mounting system. You just have four screws which
you bolt directly into the plate.

Aside from that, the cooler is identical to its desktop counterpart as far as I
know. Why the 92 mm version? For one thing, it was in stock, unlike the 120 mm
version of this cooler. Also, the CPUs only produce 40 W TDP each, so there
really is no need for high-end cooling. And as a bonus, I got supplied some
awesome San Ace fans with my case, which also happen to be 92 mm.

The Noctua fans which come with the cooler are just 3 pin fans (the newer models
of this cooler for LGA2011 come with a PWM fan I think), but the San Ace fans I
got with my case are actually PWM controlled! Since the M/B has a full set of
PWM headers (8, to be exact, how awesome is that!? :D ) I will try the San Ace
fans and see how they play at lower rpm (they run at 4,800 rpm at full speed
:o ). This does not need to be a super-silent machine since it will be in its
own room, and since I really like the San Ace fans with regard to build quality
(and I'm a total sucker for build quality) I'd love to use them for this. The
Noctuas would admittedly be better suited, but I'll see how things go with the
SA's first.
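As a side note for anyone wanting to drive PWM fans from Linux by hand rather than letting the board do it: the hwmon sysfs interface exposes both tach readings and PWM duty cycle. The paths and indices below are only an example and vary by board and driver, so treat this as a sketch, not this board's exact layout:

```shell
# Read the current speed (rpm) of the first fan -- exact hwmon path varies.
cat /sys/class/hwmon/hwmon0/fan1_input

# Switch fan 1 to manual PWM control, then set roughly 50% duty
# (PWM values run 0-255 on this interface).
echo 1   | sudo tee /sys/class/hwmon/hwmon0/pwm1_enable
echo 128 | sudo tee /sys/class/hwmon/hwmon0/pwm1
```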


The Box

Unlike its shiny desktop counterparts, the NH-U9DX comes in a nice and subtle
(but sturdy) cardboard box with a simple sticker on it. I must admit I like this
box more than the shiny ones. :)

(click image for full res)



Contents

How it looks packaged...

(click image for full res)


... and out in the open.

(click image for full res)



Noctua Pr0n

A few glory shots of the cooler itself...

(click image for full res)


(click image for full res)



The San Ace 9G0912P1G09

There is no info about this fan on the web; I'm presuming it's something San Ace
makes specifically for InWin in an OEM deal.

I've hooked it up to a fan controller and got a max reading of 4,800 rpm, and
the Supermicro board turns them down to ~2,200 rpm on idle. They seem to be very
good fans, you can only really hear the sound of the air moving, no bearing or
motor noises so far. Also, they are heavy (~200 g per piece), which is always
nice for a build quality fetishist such as myself. :D

Note: Hooking such a fan up to a desktop board as its power source would not be
advisable; they are rated for 1.1 A and might burn out the circuits on a desktop
board. Server boards usually have better fan power circuitry since they are
designed with high-performance fans in mind. Just as a side note.
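To put a rough number on that warning (back-of-the-envelope, assuming the standard 12 V fan supply; exact header ratings vary by board):

```python
# Rated current of the San Ace (from its label) on a nominal 12 V fan rail.
fan_current_a = 1.1
rail_v = 12.0

fan_power_w = fan_current_a * rail_v
print(round(fan_power_w, 1))  # 13.2 W per fan at full load

# Desktop fan headers are commonly specced around 1 A / 12 W,
# so a fan like this can exceed a header's rating all on its own.
```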

(click image for full res)



Compared to the Noctua fan which comes with the coolers. I might still go with
the Noctuas, but it's not the plan at the moment.

(click image for full res)



The Noctua NH-U9DX 1366 San Ace Edition

I had to improvise a bit with mounting the San Ace's to the tower. The clips
which you'd use with the Noctua fans rely on the fan having open corners, which
the San Ace's do not. Ah well, nothing a bit of cotton cord can't fix. :D

(click image for full res)



And the current config in its full glory:

(click image for full res)



Side note: The coolers were actually more expensive than the CPUs. :lol:


That's it for now, thanks for stopping by.
 
Oooo nice shiny bits

Yup, I must admit I quite like the Noctua coolers, it's a nice change of pace from W/C
gear for once. :)

wish all cpu were cheaper than the coolers :)

Be careful what you wish for, we might end up with 2000 USD heat sinks... :lol:

But yeah, there are some awesome deals for non-high-end LGA1366 Xeons on eBay
at the moment from companies sometimes dumping hundreds of them at a time
when they upgrade a client's server farm. I got CPUs, M/B and RAM from such deals
(albeit from different companies).

Seeing as a current-gen CPU really isn't necessary unless you absolutely need those
last few percent of performance, it's a pretty good option IMO. I'd rather have something
mid-high end from older generations than lower-end from current gen, unless
current-gen has some killer feature that I want/need. But such features are likely
in mid-high end parts anyway, so you'd have to buy the expensive current-gen
stuff, not the lower end parts.
 
should have been more careful, I was thinking cooler prices staying the same and CPUs coming down
and then one day I'll wake up from that dream
Mr E.bay is my best friend
 
should have been more careful, I was thinking cooler prices staying the same and CPUs coming down
and then one day I'll wake up from that dream

Haha, I gathered as much, but I'm a stickler for loopholes in logic. :lol:

Mr E.bay is my best friend

He has been very kind to me as well I must say.

im in!!

Also I'm quite interested in your opinion on ZFS; I've been looking at converting my Windows file server to a FreeNAS implementation using 6x3TB disks in parity, just wondering if I'd see a performance increase.

Sorry, just saw your post now, thanks for joining us! :)

So far I'm very impressed by ZFS, but there are a few caveats when it comes
to performance, especially once you start playing around with parity. I'll post
some numbers when APOLLO is up and running, don't hesitate to remind me
should I forget. ;)
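Since the question above mentions 6x3 TB in parity, here's a quick back-of-the-envelope for raw usable space (ignoring ZFS metadata overhead and the TB-vs-TiB distinction, so real numbers come in lower):

```python
def raidz_usable_tb(disks: int, disk_tb: float, parity: int) -> float:
    """Rough usable capacity of a single raidz vdev.

    Each level of parity effectively 'costs' one disk; metadata and
    slop-space overhead are ignored here.
    """
    return (disks - parity) * disk_tb

# 6 x 3 TB in raidz1 (single parity) vs. raidz2 (double parity):
print(raidz_usable_tb(6, 3.0, parity=1))  # 15.0
print(raidz_usable_tb(6, 3.0, parity=2))  # 12.0
```

Sequential throughput scales roughly with the number of data disks, but random IOPS of a raidz vdev stay close to that of a single disk, which is one of the caveats alluded to above.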
 

Ahh excellent, I'll remind you, don't worry ;) Loving the coolers by the way
 

Excellent, and thank you! :)

Up and Running, Ghetto Style


Hardware Validation

I've put the system together temporarily to validate the M/B, CPU and memory, so
far all seems good. A minimal Arch Linux setup has been installed and is
successfully running BOINC at the moment. :)

EDIT:
I'm not running BOINC as a hardware validation tool, that's not what it's
designed to do. I have (mostly) validated the hardware and am now just running
BOINC.

Just to clarify. ;)
/EDIT

Gotta love low-power CPUs, core temps after about an hour of running BOINC on
all cores are:
31 C, 31 C, 35 C, 30 C,
32 C, 26 C, 29 C, 31 C


(click image for full res)



Feast on the Ghetto-ness!

Yeah... :D

(click image for full res)



Next Up

I'll need to order some supplies for modding the front part of the case for more
HDDs. Still not sure if I'll paint it. I can't paint it in the apartment, and
temps in my workshop in the basement have dropped significantly now that it's
just a few degrees above freezing outside, so conditions for spray painting
are not optimal at the moment.
 
Why not send the inside off to be powdercoated?

Or did you mean paint all of it?

Not too bad of an idea, and yes I'm just talking about the insides, the outside actually
has a pretty nice powdercoat. I might end up making a custom front plate, and
would probably then paint that as well.

I'm a bit hesitant because for one thing the case is actually pretty solidly built and will
be a bitch to take apart and put back together (lots and lots and lots of rivets :rolleyes: )
and with college giving me plenty to do at the moment, time is a bit at a premium
right now.

Secondly, since this machine will be crucial for my dad, who will be using it as
his main business storage server (with backups elsewhere as well of course), I'm not
sure how enthusiastic he'll be about the idea of having something painted that,
technically speaking, really doesn't need to be painted for it to do the job he's
asking it to do.

Also, not painting it would allow it to get up and running sooner, which he'd also
appreciate of course. He's usually very enthusiastic about my PC projects, but this
might be pushing it a bit even for him. ;)

I would love to have black insides on it though, possibly with purple/violet sleeving.
Since purple and green are two colours that usually go pretty well together, that
would allow me to have a properly colour-coordinated rig despite having a server
M/B with that green PCB in there. Of course it's still a matter of taste, but that's
what's come to mind so far.

But thanks for the suggestion, somehow powder coating had completely slipped my
mind. If I end up going for painted insides that definitely looks like the best option
(as long as I can find somebody who does it for a halfway decent price). :)

If I don't end up painting it, I'll just do things as cleanly as possible to have it at
least look very tidy.


Cheers,
-aw
 
bloody hell alpenwasser, is there ever a point where you don't have a build on the go haha
 
If its purpose is to be a server and you are not going to put a window panel on it, I don't think there is much reason to paint/powder coat the inside. As much as it would be nice to see and would make a good build log, it just isn't necessary.

Definitely good prices on the hardware; curious how well it performs. I have plans for a home server/test bench with virtualisation, so I could do with something with a little bit of guts about it.
 