Virtualization project

hmmblah

I'm currently in the middle of a virtualization project at work using VMware vSphere and shared SAN storage. Thought I would share a few pics as I imagine most of you don't get to see this sort of thing.

This is an EqualLogic PS6100X iSCSI SAN. It has 24 300GB 10K RPM SAS drives that will be configured in RAID 6, giving ~6.6TB of usable space. I got a tremendous deal on this unit and paid less than 50% of its value.

ps6100x.jpg
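
If anyone wants to sanity-check that usable figure, here's a quick back-of-the-envelope calc. It assumes plain RAID 6 with no hot spares, so it won't exactly match the EqualLogic's own RAID 6 layout, but it lands in the same ballpark:

```python
# Rough usable-capacity estimate for 24 x 300GB drives in RAID 6.
drives = 24
drive_gb = 300                        # per-drive capacity in GB

raid6_parity = 2                      # RAID 6 gives up two drives' worth to parity
raw_tb = drives * drive_gb / 1000
usable_tb = (drives - raid6_parity) * drive_gb / 1000

print(f"Raw:    {raw_tb:.1f} TB")     # 7.2 TB
print(f"Usable: {usable_tb:.1f} TB")  # 6.6 TB, the ~6.6TB quoted above
```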


Here's a shot of the back where the controllers are. All 8 gigabit ports will be used for best performance, connecting to a private iSCSI network built on two Cisco gigabit switches joined with a FlexStack cable. This gives both high availability and good performance.

ps6100xcontrollers.jpg


I will be connecting two Dell R710 rackmount servers to the SAN. These will be my VMware hosts. Both have dual Xeon CPUs; one server has 24 threads, the other has 16.

Below is the memory upgrade going into both servers. Each package is a 24GB kit, with 144GB going into each server for a total of 288GB.

640kb.jpg
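
Quick tally of the upgrade for anyone keeping score. The totals come straight from the kit count above; the 8GB stick size is just my assumption that a 24GB kit is 3 x 8GB DIMMs:

```python
# Memory upgrade math: 24GB kits, six kits per server, two servers.
kit_gb = 24
kits_per_server = 6
servers = 2
stick_gb = 8                                  # assumed: 24GB kit = 3 x 8GB DIMMs

per_server_gb = kit_gb * kits_per_server      # 144GB per R710
total_gb = per_server_gb * servers            # 288GB across both hosts
dimms_per_server = per_server_gb // stick_gb  # 18 sticks, filling the R710's 18 slots

print(per_server_gb, total_gb, dimms_per_server)  # 144 288 18
```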


Anyway, just thought I would share as I'm pretty geeked out over the whole project.
 
Yep for work. I spec'd that SAN out on my Dell account, came to just under $30k. Dell sold it to me for $12,999. That's the price of their lower end PowerVault line. The memory was just over $2k, about $180 a kit.
 

Imagine the price if it were ECC memory, lol (assuming it isn't, because 24GB of ECC RAM is about £500).

That looks unbelievable though
 
It's a legitimate question! Imagine the PPD from something like that. I doubt your work would be too happy with the idea, mind you.
 
Well, the SAN itself is just high-speed storage, so no crunching WUs there. The servers used to be able to crank out 100K PPD each using -bigadv. I "tested and burnt in" the CPUs with F@H for a few weeks last year. With the way points are now, though, it's probably more like 50K PPD each. There's no overclocking on the servers themselves, and the PPD/$ ratio is pretty bad for this sort of hardware.

As far as running servers goes, though, these will host quite a few guests, starting out with 10 and converting more physical servers to virtual over the next 6 months. What took 3/4 of a rack before will now take just 6U, and I'll have high availability, easier backups, and very easy deployment of new servers. Oh, and far less electricity to run it all.

Will there be any update pics in the near future, or is everything sort of done now? 'Cos I want MOARRR!

Still waiting on parts to come in, so I haven't started yet. I'll update this thread as I go.
 
Small update: over the weekend I updated my hosts to ESXi 5.1. I also upgraded the memory and added another network card to each server, so each server now has 6 NICs. I updated the BIOS on the servers as well, since there were some performance and memory fixes.

I ran into a little problem with the memory. When all slots are full, the memory speed drops from 1333MHz to 800MHz. This is a pretty big difference and will affect performance. I opted to remove 6 sticks from each server, so instead of 144GB each they now have 96GB each, but everything runs at 1333MHz instead of 800MHz. All this really means is that I will have to add another host server or move to 16GB sticks sometime in the future. For now I will still have plenty of room to grow.
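
To put numbers on that tradeoff, here's a little sketch. It assumes 8GB sticks (a 24GB kit being 3 x 8GB) and the R710's 18 slots laid out as 2 CPUs x 3 channels x 3 slots per channel; the speeds just mirror what I saw, 1333MHz with up to 2 DIMMs per channel and 800MHz once the third slot in every channel is filled:

```python
# Capacity vs speed for the R710 memory layout described above.
STICK_GB = 8                 # assumed stick size (24GB kit = 3 x 8GB)
CHANNELS = 6                 # 2 sockets x 3 memory channels each

def config(dimms_per_channel):
    dimms = CHANNELS * dimms_per_channel
    capacity_gb = dimms * STICK_GB
    speed_mhz = 1333 if dimms_per_channel <= 2 else 800
    return dimms, capacity_gb, speed_mhz

for dpc in (2, 3):
    dimms, cap, speed = config(dpc)
    print(f"{dpc} DIMMs/channel: {dimms} sticks = {cap}GB @ {speed}MHz")

# 2 DIMMs/channel: 12 sticks = 96GB @ 1333MHz   <- what I ended up with
# 3 DIMMs/channel: 18 sticks = 144GB @ 800MHz   <- all slots full
```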

Dell PowerEdge R710 with the hood off:
r710open.jpg


Close up of the internals:
r710close.jpg


This is with the memory slots full, but like I said above I did end up removing 6 sticks:
fullslots.jpg


Additional network card installed:
bc.jpg


VMware running:
vmware.jpg



I am now working on getting the SAN up and running. As of this morning the firmware has been updated to the latest version. I also set up the iSCSI network using two stacked Cisco 2960S switches. I need to run 4 more network cables to each server, then the real fun will start.

Using vMotion I'll transfer all the guests from one server to the other. Then I'll be able to take one server offline, remove all the hard drives, and install a 2GB USB stick into the internal USB port. Then I'll install ESXi 5.1 again, this time onto the USB stick. When that is up and running I'll vMotion all the guests to this new server and redo the second server using a USB stick for the ESXi install.

Lastly, I will vMotion some of the guests back to the other server to distribute the load across both of them.
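
For anyone curious what a move like that looks like scripted, here's a rough sketch using pyVmomi, the vSphere Python SDK. The vCenter address, credentials, VM name and host name below are just placeholders, so take it as an illustration of the API call involved rather than exactly what will run here:

```python
# Rough sketch of a scripted vMotion with pyVmomi. All names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

ctx = ssl._create_unverified_context()           # lab only: skip cert validation
si = SmartConnect(host="vcenter.example.local",  # placeholder vCenter
                  user="administrator", pwd="password", sslContext=ctx)
content = si.RetrieveContent()

vm = find_by_name(content, vim.VirtualMachine, "guest01")                   # placeholder VM
dest = find_by_name(content, vim.HostSystem, "r710-esxi02.example.local")   # the other host

# Live-migrate the running guest to the other R710. With the datastore on the
# shared EqualLogic SAN, only CPU/memory state moves; the disks stay put.
task = vm.MigrateVM_Task(host=dest,
                         priority=vim.VirtualMachine.MovePriority.defaultPriority)
print("vMotion task started:", task.info.key)

Disconnect(si)
```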
 
Hi hmmblah.
Are you doing anything fancy with vmotion/iscsi regarding fault tolerance?

I've got a couple of 710s kicking around here with an old EqualLogic SAN (not supported in ESX 4!!!) which I'm about to start some fault-tolerance tests on.

Just wondered if you had played with the HA/FT capabilities of vCenter/vSphere, or whatever the hell they are calling it this week.

This is the kind of stuff I work on all the time, although I'm having a nightmare now because the leap in performance between 9th and 10th gen servers is just mad. And even though our customer has to have "the best", we are using 1950s as XP workstations...

Cheers
Lee
 
Not meaning to hijack at all... but just thought I would post this in here knowing Tom's anal attention to cable mess.
This is a system I installed about 3 years ago and there are 2 racks like that.

photo-2.jpg


Sadly I can't fold on it... it's not even supposed to exist... ;)
 
Hi hmmblah.
Are you doing anything fancy with vmotion/iscsi regarding fault tolerance?

I'm pretty new to vCenter so still having a play with it all. I'm using a separate isolated physical network for my iSCSI/vMotion traffic. I can lose one switch and still have the network up. There are also dual controllers in my EqualLogic box with automatic failover if an interface dies. On my servers I am using 4 NICs for iSCSI/vMotion traffic so I should be covered there as well.
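
If you want to double-check that layout from the API side, here's a quick pyVmomi sketch that lists every host's VMkernel adapters; the iSCSI/vMotion vmk ports should show up alongside the management one. The vCenter address and credentials are placeholders again:

```python
# List each host's VMkernel NICs to eyeball the iSCSI/vMotion port layout.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # lab only: skip cert validation
si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

hosts = content.viewManager.CreateContainerView(content.rootFolder,
                                                [vim.HostSystem], True)
for host in hosts.view:
    print(host.name)
    for vmk in host.config.network.vnic:         # VMkernel adapters on this host
        print("  {:6} {:20} {}".format(vmk.device, vmk.portgroup,
                                       vmk.spec.ip.ipAddress))

Disconnect(si)
```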

I finished the wiring to the servers this morning so I've moved a couple VMs to the new datastore. The process was quick and there was no interruption of service.

As far as the HA features of vCenter 5.1 go, I haven't touched them yet. I will be using AppAssure for backups; that's some pretty cool software. I'm also looking into the AV features of 5.1. It would be nice not to have to install AV clients on the guests.
 