Archive for May, 2011
Amazing that it’s been half a decade now since we got the new blade servers that we are running on (got them in June of 2006).
Sadly, 12GB in a server just isn’t what it used to be and the database servers are really starting to feel strained as they beg for more memory. Handling ~25GB of databases on servers with 12GB is just no bueno. I thought of upgrading the RAM in the blades, but the maximum RAM the BIOS supports is 16GB, and all the DIMM slots are in use… so it would more or less mean tossing all the existing RAM and buying 160GB of new RAM (16GB for each of the 10 blades). In the end, spending that much money to gain 4GB per server isn’t really worth it (especially since 16GB wouldn’t really be enough either).
I started to do a little research on what would be some good options for upgrades… I think I want to stay away from blades simply because we did have the daughter card in the chassis fail once, which took down all 10 blades (the daughter card controls the power buttons). Buying 2 complete sets of blades/chassis just so you have something still running if one complete chassis goes down is overkill. The “must haves” on my list are hot swappable drives with hardware RAID, hot swappable redundant power supplies, and some sort of 10Gbit/sec (or higher) connectivity for communication between servers. On top of it all, the servers need to be generally “dense” (I don’t want to take up an entire rack of space).
The Dell C6100 actually looked really nice at first glance…
- Super dense (4 servers in a 2U package)
- Redundant power supplies (hot swap)
- 24 (!!) hot swap drives
- Supports 10GbE or even QDR Infiniband (40Gbit/sec)
But it has two big downsides:
- Age – the server itself came out about 18 months ago without any refresh. That means the hard drive and CPU options are a year and a half old
- Price – OMG… a single unit loaded up with 4 nodes, drives, RAM, Infiniband, etc. works out to $54,709 (before tax).
The age factor really becomes an issue when it comes to the disk drives… you can get 15k rpm drives in a 2.5″ form factor, but Dell only offers 146GB models (there are 300GB models now). The CPU isn’t really too bad… Dell’s fastest offering is the Xeon X5670… 2.93GHz, 6 cores @ 95 watts (wattage is important because of so much stuff crammed in there). There is a slightly faster 95 watt, 6 core processor these days… the Xeon X5675… basically the same thing, just 3.06GHz. A 0.13GHz speed difference isn’t a huge deal… but the hard drive difference is a big deal.
I started to think… well, maybe I could just order it stripped down and then replace the CPUs/hard drives with better stuff myself. Doing that, you still end up spending about $60,000 because you end up with 8 Xeon processors (valued at about $1,500 each) that you are just going to throw away.
Then I started to think harder… Wait a minute… Dell doesn’t even make their own motherboards (at least I don’t think so)… so maybe I could find the source of these super dense motherboards and build my own systems (or just find the source of something similar… which ended up being the case).
What do we have here??? The Supermicro SuperServer 2026TT-H6IBQRF is more or less the same thing… 4 servers per 2U, hot swap hard drives, same BIOS/chip/controllers… supports the same Xeon processor family (up to 95 watts)… And as a bonus, Infiniband QDR is built in (it’s a $2,596 add-on for the Dell server) as well as an LSI MegaRAID hardware RAID card (a $2,156 add-on for the Dell server).
So let’s add up the cost to build one of these things with the CPU/hard drives I actually would WANT…
- Chassis (includes 4 motherboards, 2 power supplies, hard drive carriers, etc.) – $4,630.98
- Xeon X5675 3.06GHz CPU – $1,347.84 each, so $10,782.72 for 8
- 8GB ECC/Reg DIMM – $124.26 each, so $5,964.48 for 48
- 600GB Seagate Savvio 10K.5 – $391 each, so $9,384 for 24 (also about 4× the capacity, 21% faster and 20% more reliable than the 146GB options from Dell)
Add It All Up…
$30,762.18 would be the total cost (a savings of $23,946.82 over the Dell quote) and in the end you get slightly faster CPUs and *way* better hard drives. So in a single 2U package, you end up with 48 Xeon cores at 3.06GHz, 384GB of 1333MHz memory, 14.4TB of drive space (9.6TB usable after it’s configured as double redundant parity striping with RAID-6… which should be able to do more than 750MB/sec read/write), plus 8 gigabit Ethernet ports and 4 Infiniband QDR (40Gbit) ports.
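If you want to double-check the math yourself, a quick bit of awk sums the line items above:

```shell
# Sum the per-item prices quoted above: chassis, 8 CPUs, 48 DIMMs, 24 drives
awk 'BEGIN {
  total = 4630.98 + 8 * 1347.84 + 48 * 124.26 + 24 * 391
  printf "Supermicro build total: $%.2f\n", total
  printf "Savings vs. the $54,709 Dell quote: $%.2f\n", 54709 - total
}'
```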
Get 2 or 3 of those rigs, and you have some nasty (in a good way) servers that would be gloriously fun to run MySQL Cluster, web servers and whatever else you want.
So I was in a situation where I wanted to upgrade the operating system on 10 blade servers that are in a data center. The problem is I really didn’t want to sit there and install the new operating system on each one, one by one. The other issue is the servers don’t have CD/DVD drives in them since they are blades.
I have a couple of older Xserve G5s in the facility, so I figured why not use them as network boot servers for the Linux machines? By default, OS X Server has a service for NetBoot (which is not the same thing, and can really only be used to boot other Mac machines). But Mac OS X Server also has all the underlying services already installed to let it act as a server for PXE booting from a normal Intel BIOS.
So what network services do we need exactly (at least for how I did it)? DHCP, NFS, TFTP and optionally Web if you do an auto-install like I did.
This was written up in about 5 minutes, mostly so I wouldn’t forget what I did in case I needed to do it again. Some assumptions are made, like that you aren’t completely new to Linux/Mac OS X administration. You can also have pxelinux boot to an operating system selection menu and some other things, but for what *I* wanted to do, I didn’t care about being able to boot into multiple operating systems/modes.
Setting Up The Server
Mac OS X Server makes it really simple to get DHCP, NFS and Web servers online since the normal Server Admin has a GUI for each.
Mac OS X has a TFTP server installed by default, but it’s not running by default and has no GUI. You can of course enable/configure it from the shell, but just to make things simple, there is a free app you can download that will make configuring and starting the TFTP service simple (the app is just a configuration utility, so it does not need to keep running once you are finished and adds no overhead). You can grab the app over here.
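If you’d rather skip the app and just do it from the shell, something along these lines should work (the launchd plist path is from OS X of this era; double-check it on your version):

```shell
# Enable the built-in TFTP daemon; -w makes the change persist across reboots.
# By default it serves files out of /private/tftpboot.
sudo launchctl load -w /System/Library/LaunchDaemons/tftp.plist
```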
Files Served By TFTP
I was installing openSUSE 11.4, so that is what my example revolves around… most Linux installations should be similar, if not identical. First, make sure you have syslinux, which you probably already have since most Linux distributions install it by default.
Copy /usr/share/syslinux/pxelinux.0 to your Mac OS X TFTP Server: /private/tftpboot/pxelinux.0
Create a directory on your Mac OS X Server: /private/tftpboot/pxelinux.cfg, and then create a file named default within that folder with the following:
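A minimal pxelinux.cfg/default for this setup looks something like this (192.168.1.10 and the autoyast.xml URL are placeholder values for illustration; the osuse11-4.krnl/osuse11-4.ird names match the renamed kernel/initrd files described below):

```
default osuse11-4
prompt 0

label osuse11-4
  kernel osuse11-4.krnl
  append initrd=osuse11-4.ird install=nfs://192.168.1.10/images/opensuse11-4 autoyast=http://192.168.1.10/autoyast.xml
```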
Obviously change the IP address to your Mac OS X Server IP address. The autoyast part is optional and only needed if you have an auto-configuration file for YaST.
Now we want to grab two files from the installation DVD/.iso so we have the “real” kernel.
Copy /boot/x86_64/loader/linux from the installation DVD to your Mac OS X TFTP Server: /private/tftpboot/osuse11-4.krnl (notice the file rename)
Copy /boot/x86_64/loader/initrd from the installation DVD to your Mac OS X TFTP Server: /private/tftpboot/osuse11-4.ird (notice the file rename here too)
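Assuming you’re working from the .iso on the Mac itself, the two copies look roughly like this (the .iso filename and mounted volume name below are placeholders; use whatever yours actually are):

```shell
# Mount the openSUSE 11.4 install image and copy its PXE kernel/initrd
# into the TFTP root under the renamed filenames.
hdiutil mount openSUSE-11.4-DVD-x86_64.iso
sudo cp /Volumes/openSUSE-DVD/boot/x86_64/loader/linux  /private/tftpboot/osuse11-4.krnl
sudo cp /Volumes/openSUSE-DVD/boot/x86_64/loader/initrd /private/tftpboot/osuse11-4.ird
```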
Special Options For DHCP Server
There is not a GUI for adding special options to the DHCP server, but it’s easy enough to add them manually. Just edit the /etc/bootpd.plist file, and add two keys/values to it (option 150 is the IP address of your TFTP server, option 67 is the kernel file name the TFTP server is serving):
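Inside the subnet’s dictionary in /etc/bootpd.plist, the two additions look roughly like this (bootpd wants binary-valued options like an IP address as base64 `<data>`; `wKgBCg==` decodes to 192.168.1.10, so substitute your own server’s address):

```
<key>dhcp_option_150</key>
<data>wKgBCg==</data>                 <!-- TFTP server IP as base64-encoded bytes -->
<key>dhcp_option_67</key>
<string>pxelinux.0</string>           <!-- boot file to request from the TFTP server -->
```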
NFS Sharing Of Installation DVD
Edit your /etc/exports file, and add the following line to it:
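Something along these lines does the trick (read-only is all the installer needs; restricting the export to your subnet is optional, so adjust the network/mask to match yours):

```
/images -ro -network 192.168.1.0 -mask 255.255.255.0
```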
Then just copy the contents of the entire installation DVD to /images/opensuse11-4. You can of course just do a symlink or something if you want.
Cross Your Fingers…
If all goes well, you should see something along the lines of this when you choose to network boot a machine (for the purposes of this video, I did it in a virtualized environment with Parallels so I didn’t need to reboot any running servers):