APOLLO (2CPU LGA1366 Server | InWin PP689) - by alpenwasser

Amazing how you managed to get even this amount of bright red cables looking tidy :)
THIS!! Fantastic job of doing that so neatly without custom length cables! :notworthy:

P.S. all that redundancy and only a single drive for the OS? Shame, shame. :D
 
That really is the definition of third time's the charm ;), looks so tidy, makes me feel slightly ashamed of my cables now :(.
 
THIS!! Fantastic job of doing that so neatly without custom length cables! :notworthy:

Haha, yeah custom length SAS/SATA cables, that would really be something! :wub:

And thanks! :)

P.S. all that redundancy and only a single drive for the OS? Shame, shame. :D

Hehe, it will be backed up to the Velociraptor (just haven't built it in yet),
but in the end the host OS isn't that important; the important things are
the virtual machines (which will be backed up). So basically, if the SSD fails,
I can just get a new one, do a clean install (which can be done very quickly
because it's a very basic setup), copy the VMs back over, and I'm back up and
running again.

I'll probably get another SSD for HELIOS at some point, then I'll use the one
I have in there now as a spare for APOLLO.

EDIT:
I should probably mention that having a bit of downtime is not a huge deal
on this machine as long as it's not too long (like, a few hours). Otherwise I
would have RAID1-ed the OS drive, of course. But in an emergency I can always
use the Velociraptor as a temporary system disk, since I have it lying around
anyway; that will be adequate until I get another SSD.
/EDIT

That really is the definition of third time's the charm ;), looks so tidy, makes me feel slightly ashamed of my cables now :(.

Haha, yup it is indeed, thanks! :)

I recommend you get to work on your cables then! :whipping: :p
 
The grill-less heat sink fan is makin' me anxious!!! LOL

Haha, yeah, I thought about mounting grills on the fans (especially those
high-rpm ones in the middle wall), but all the cables are pretty taut and
don't go anywhere, so it's not really an issue, and the machine won't be
opened while it's running, so there's no danger to anyone's fingers either.
 
Storage and Networking Performance




Beware: This post will be of little interest to those
who are primarily in it for the physical side of
building. Instead, this update will be about the performance
and software side of things. So, lots of text, lots of
numbers. :D

These results are still somewhat preliminary since I'm not
yet 100% sure if the hardware config will remain like this
for an extended period of time (I really want to put another
12 GB of RAM in there, for example, and am considering
adding some SSD goodness to my ZFS pools), nor am I
necessarily done with tuning software parameters, but they
should give some idea of what performance I'm currently
getting.

As you may recall from my previous update, I'm running three
VMs on this machine, two of which are pretty much always on
(the media VM and my personal VM), and the third of which is
only active when I'm pulling a backup of my dad's work
machine (apollo-business).



NOTE: I know there's lots of text and stuff in my
screenshots and it may be a bit difficult to read. Click
on any image to get the full-res version for improved
legibility. :)


The storage setup has been revised somewhat since the last
update. I now have a mirrored ZFS pool in ZEUS for backing
up my dad's business data (so, in total his data is on six
HDDs, including the one in his work machine). His data is
pulled onto the apollo-business VM from his work machine,
and then pulled onto ZEUS. The fact that neither the
business VM nor ZEUS is online 24/7 (ZEUS is physically
turned off most of the time) should provide some decent
protection against most mishaps; the only thing I still
need to implement is a proper off-site backup plan (which
I will definitely do, in case of unforeseen disasters,
break-ins/theft and so on).
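
For the curious: I won't reproduce my actual scripts here,
but the "pulling" is conceptually just a pull-style sync job.
As a rough sketch (not my literal setup; rsync is only one
way to do it, and the hostnames/paths below are made up for
illustration):

    # run on the apollo-business VM: pull the business data from
    # dad's work machine over SSH (host/paths are placeholders)
    rsync -a --delete dad-pc:/data/business/ /srv/business/

    # later, run on ZEUS: pull the same data from the VM onto
    # the mirrored backup pool (again, placeholder paths)
    rsync -a --delete apollo-business:/srv/business/ /backup/business/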


(click image for full res)



The Plan

For convenience's sake, I was planning on using NFS for
sharing data between the server and its various clients
on our network. Unfortunately, I was getting some rather
disappointing benchmarking results initially, with only ~60
MB/s to ~70 MB/s transfer speeds between machines.


Tools

I'm not really a storage benchmarking expert, and at the
moment I definitely don't have the time to become one, so
I've used dd for benchmarking my storage. It's easy to use
and is pretty much standard on every Linux install. I
thought about using other storage benchmarks like Bonnie++
and FIO, and at some point I might still do that, but for
the time being dd will suffice for my purposes.

For those not familiar with this: /dev/zero basically
serves as a data source for endless zeroes, while /dev/null
is a sink into which you can write data without it being
written to disk. So, if you want to run write benchmarks
on your storage, you can grab data from /dev/zero without
needing to worry about a bottleneck on the data source
side, and /dev/null is the equivalent when you wish to run
read benchmarks. To demonstrate this, I did a quick test
below directly from /dev/zero into /dev/null.

That's the basic idea. It's a bit of a simplification, but
I hope it's somewhat understandable. ;)
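
To give an idea of what such a dd run looks like, here are
illustrative invocations (not my exact commands; the pool
mount point and the block/count values are placeholders):

    # pure pipe test: /dev/zero straight into /dev/null
    dd if=/dev/zero of=/dev/null bs=1M count=10000

    # write test: zeroes onto the pool
    dd if=/dev/zero of=/pool/testfile bs=1M count=10000

    # read test: the file back into /dev/null
    dd if=/pool/testfile of=/dev/null bs=1M count=10000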


Baseline


Before doing storage benchmarks across the network, we
should of course get a baseline for both the storage setup
itself and the network.

The base pipe from /dev/zero into /dev/null has a transfer
speed of ~9 GB/s. Nothing unexpected, but it's a quick test
to do and I was curious about this:


(click image for full res)



For measuring the network baseline I used iperf; here's a
screencap from one of my test runs. The machine it was
running on was my personal VM.

Top to bottom:
- my dad's Windows 7 machine
- APOLLO host (Arch Linux)
- HELIOS (also Windows 7 for the time being, sadly)
- ZEUS (Arch Linux)
- My Laptop via WiFi (Arch Linux)
- APOLLO business VM (Arch Linux)
- APOLLO media VM

The bottom two results aren't really representative of
typical performance; usually it's ~920 Mbit/s to ~940
Mbit/s. But as with any setup, outliers happen.
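
For reference, iperf itself is dead simple to use; the gist
of what I ran looks something like this (the hostname is
just an example, and I'm not listing my exact flags here):

    # on the machine acting as the iperf server (my personal VM)
    iperf -s

    # on each client, pointing at that server
    iperf -c apollo-vm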


(click image for full res)



The networking performance is where I hit my first hiccup.
I had failed to specify which networking driver the VM was
supposed to use, and the default one does not exactly have
stellar performance. It was an easy fix though, and with the
new settings I now get pretty much the same networking
performance across all my machines (except the Windows ones,
those are stuck at ~500 Mbit/s for some reason as you can
see above, but that's not hugely important to me at the
moment TBH).
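
I won't post my full VM config, but to illustrate what
"specifying the driver" means: on a QEMU/KVM-style setup the
difference is between an emulated NIC model and the
paravirtualized virtio one, roughly like this (a sketch,
not my exact invocation; the bridge name is a placeholder):

    # emulated NIC model (e.g. e1000), typically slower:
    qemu-system-x86_64 [other VM options] \
        -netdev bridge,id=net0,br=br0 -device e1000,netdev=net0

    # paravirtualized virtio NIC, much closer to wire speed:
    qemu-system-x86_64 [other VM options] \
        -netdev bridge,id=net0,br=br0 -device virtio-net-pci,netdev=net0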

This is representative of what I can get most of the time:

(click image for full res)



I had a similar issue with the storage subsystem at first:
the default caching parameters were not very conducive to
high performance and resulted in some pretty bad results:

(click image for full res)



Once I fixed that, though, things looked much better, and
were sufficient to saturate a gigabit network connection.
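
Again, I'm not going to claim this is exactly what my config
looks like, but to show the kind of knob I mean: on a
QEMU/KVM-style setup, the cache mode on a virtual disk is a
typical example (paths and values below are placeholders):

    # virtual disk left at the default cache behaviour:
    qemu-system-x86_64 [other VM options] \
        -drive file=/path/to/vm.img,if=virtio

    # same disk with an explicit cache mode set:
    qemu-system-x86_64 [other VM options] \
        -drive file=/path/to/vm.img,if=virtio,cache=writeback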


(click image for full res)




Networking Benchmark Results

Initially, I got only around 60 MB/s for NFS; after that,
the next plateau was somewhere between 75 MB/s and 80 MB/s;
and lastly, this is the current situation. I must say I find
the results to be slightly... peculiar. Pretty much
everything I've ever read says that NFS should offer better
performance than CIFS, and yet, for some reason, in many
cases that was not the result I got.

I'm not yet sure if I'll be going with NFS or CIFS in the
end, to be honest. On one hand, CIFS does give me better
performance for the most part, but I have found NFS more
convenient to configure and use, and NFS' performance at
this point is decent enough for most of my purposes.

In general, I find the NFS results just rather weird
TBH. But they have been reproducible over different runs on
several days, so for the time being I'll accept them as what
I can get.
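
For anyone wondering what tuning the parameters even looks
like on the client side: most of it boils down to mount
options. A generic sketch (the server name, share paths and
numbers are placeholders, not my actual config):

    # NFS mount with explicit read/write block sizes
    mount -t nfs -o rsize=131072,wsize=131072 server:/export/media /mnt/media

    # CIFS mount of the same share for comparison
    mount -t cifs -o username=user,rsize=130048,wsize=130048 //server/media /mnt/media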


Anyway, behold the mother of all graphics! :D


(click image for full res)



FTP

As an alternative, I've also tried FTP, but the results were
not very satisfying. This is just a screenshot from
one test run, but it is representative of the various other
test runs I did:

(click image for full res)



ZFS Compression

Also, for those curious about ZFS' compression (which was
usually disabled in the above tests because zeroes are very
compressible and would therefore skew the benchmarks), I did
a quick test to compare writing zeroes to a ZFS pool with
and without compression.
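
Toggling this is a one-liner per dataset; something like the
following (the pool/dataset name is just an example):

    # enable lz4 compression on a dataset
    zfs set compression=lz4 pool/data

    # disable it again for the uncompressed runs
    zfs set compression=off pool/data

    # check the setting and the achieved compression ratio
    zfs get compression,compressratio pool/data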

This is CPU utilization without compression (the grey bars
are CPU time spent waiting for I/O, not actual work the CPU
is doing):

(click image for full res)


And this was the write speed for that specific test run:
(click image for full res)



With lz4 compression enabled, the CPU does quite a bit more
work, as expected (though it still seems that you don't
really need a very powerful CPU to make use of this):

(click image for full res)



And the write speed goes up almost to a gigabyte per second,
pretty neat if you ask me. :D

(click image for full res)



Side note: ZFS' lz4 compression is allegedly smart enough
not to try to compress incompressible data, such as media
files which are already compressed, which should prevent
such writes from being slowed down. Very nice IMHO.



That's it for today. What's still left to do at this point
is installing some sound-dampening materials (the rig is a
bit on the loud side, despite being in its own room), and
possibly upgrading to more RAM; the rest will probably
stay like this for a while. If I really do upgrade the
RAM, I'll adjust the VMs accordingly and run the tests
again, just to see if that really makes a difference. So far
I have been unable to get better performance from my ZFS
pools by allocating more RAM, or even by running benches
directly on the host machine with the full 12 GB of RAM and
eight cores/sixteen threads.


Cheers,
-aw
 
Sound Dampening & Final Update

Sound Dampening, Final Pics


As mentioned previously, the 92 mm fans are rather noisy,
but I didn't want to replace them. For one thing, I do
actually need some powerful fans to move air from the HDD
compartment into the M/B compartment; for another, I didn't
feel like spending more money on expensive fans.

For this purpose, I ordered some AcoustiPack foam in various
thicknesses (12 mm, 7 mm and 4 mm) and lined parts of
the case with them. I wasn't quite sure how well they
would work, as my past experiences with acoustic dampening
materials weren't all that impressive, but to my surprise,
they're actually pretty damn effective.

I have also put in another 12 GB of RAM. I was lucky enough
to get six 2 GB sticks of the exact same RAM I already had
for 70 USD (plus shipping and fees, but still a pretty good
price IMHO) from eBay. 24 GB should easily suffice for my
purposes.


Lastly, I've repurposed the 2.5" drive cage from my Caselabs
SMH10; cleaner than the rather improvised mount from before.



For the time being, the build is now pretty much complete.


Cost Analysis

One of the original goals was to not have this become
ridiculously expensive. Uhm, yeah, you know how these things
usually go. :rolleyes:

Total system cost: ~5,000 USD
of which were HDDs: ~2,500 USD

My share of the total cost is ~42%, the remainder was on my
dad, which is pretty fair I think. In the long run, my share
will probably rise as I'll most likely be the one paying for
most future storage expansions (at the moment I've paid for
~54% of the storage cost, and ~31% of the remaining
components).

One thing to keep in mind, though, is that some of these
costs go back a while, as not all the HDDs were bought for
this server; some were migrated into it from other machines.
So the actual project cost was about 1,300 USD lower.

Overall I'm still pretty happy with the price/performance
ratio. There aren't really that many areas where I could
have saved a lot of money without also taking noticeable
hits in performance or features.

I could have gone with a single-socket motherboard, or a
dual-socket one with fewer features (say, fewer onboard
SAS/SATA ports, as I'm not using nearly all of the ones this
one has due to the 2 TB disk limit), but most of the
features this one has I wouldn't want to miss TBH (the four
LAN ports are very handy, and IPMI is just freaking
awesome). And let's be honest: a dual-socket board just
looks freaking awesome (OK, I'll concede that that's not the
best argument, but still, it does!). :D

Other than that, I could have gone with some cheaper CPU
coolers, as the 40 W CPUs (btw., core voltage is ~0.9 V :D)
don't really require much in that area, but the rest is
pretty much what I want and need for an acceptable price.


Anyway, enough blabbering:


Final Pics

So, some final pics (I finally managed to acquire our DSLR
for these):

(click image for full res)


(click image for full res)


(click image for full res)


(click image for full res)


(click image for full res)



That Caselabs drive cage I mentioned. The top drive is the
WDC VelociRaptor.

(click image for full res)



And some more cable shots, because why not.

(click image for full res)


(click image for full res)


(click image for full res)



Looks much better with all RAM slots filled IMHO. :D

(click image for full res)


(click image for full res)


(click image for full res)


(click image for full res)


(click image for full res)


(click image for full res)



It's kinda funny: Considering how large the M/B compartment
actually is, it's pretty packed now with everything that's
in there. The impression is even stronger in person than on
the pics.

(click image for full res)




Thanks for tagging along everyone, and until next time! :)
 
brb changing pants

lol, thx! :D

That Sata cable management is amazing!

I love this build so much more than I thought I would :D

You are hereby labeled the cable messiah! :lol:


Thanks guys, much appreciated! :)

I must admit that I am indeed pretty happy with the cabling (then again,
after three bloody tries it better be up to snuff :D). It could still be
improved somewhat with longer cables, but they're pretty expensive,
and I'm not spending another 100 USD on cables just to get them slightly
tidier. :crazy:
 