Several people have set up some pretty decent, usable home labs… Here are a few to point out:
- Kendrick Coleman – vSphere Home Lab – “The Green Machines”
- Jeramiah Dooley – VMware vSphere Home Lab: Keeping up with the Joneses
- Tommy Trogden – The VMware Home Lab
- Duncan Epping – My Homelab
- Simon Seagrave – VMware ESX(i) Home Lab; Why, What and How? Considerations when building your own home lab.
As I mentioned in a previous post, picking hosts can be a daunting task, and there are a few things to take into account. I looked at desktop configurations that would support vSphere, and server configurations that vSphere supports.
A couple of things stood out as very common in the comparison. (Also keep in mind that Micro ATX is the form factor I chose, for space reasons.)
- If I configured a desktop board that “has been known” to work:
  - I could get a motherboard that was reasonably priced.
  - Compatible RAM is very reasonably priced.
  - There is only a single onboard nic.
  - The onboard nic was more than likely not supported, so additional nics are required.
  - The available expansion slots ranged from 1-2 PCI-X slots and 1-4 PCI Express slots.
  - AMD gave better options for Hex Core CPUs.
- If I configured a server board that “is verified” to work:
  - More often than not, ECC RAM was required, and as I mentioned in my previous post, ECC RAM is not cheap.
  - One or more onboard nics are available, and oftentimes are supported by ESXi without requiring additional drivers.
  - The motherboard is more expensive than a desktop motherboard.
What do I sacrifice to get the best of both worlds? RAM at a better price, while having to purchase more nics? How many cores do I really need…
Do I need more cores like Jeramiah chose? Did he give anything up by choosing AMD over Intel? Is VMDirectPath supported on the AMD desktop based boards? Not sure.
Also, because many SandyBridge server boards support the Intel Pentium G620(T)/G840(T)/G850 and Core i3-2100(T)/2105/2120, I had some options. The Pentium GXXX processors have dual cores, but no Hyperthreading, while the Core i3-2100 series procs have dual cores and Hyperthreading.
Also, because I chose to buy 2 hosts, and not 5 like Jeramiah, I wanted to be sure I could run nested ESXi boxes and have enough cores for the nested ESXi hosts as well. So I had to choose hosts that had at least 4 cores. That knocks the Pentium G6xx/8xx processors out of the running. The Core i3-2100 series procs had the right number of logical cores, but not the number of physical cores. I almost picked the Core i3-2100T, a low power version of the Core i3-2100. But I’m pretty certain that I’m going to need at least 4 cores running nested ESXi hosts from time to time.
So I decided to go with the Xeon E3-1200 series processors. The E3-1220 offers 4 cores, but unfortunately no Hyperthreading. I picked the Xeon E3-1230 as the processor of choice, because it has 4 cores with Hyperthreading. Another deciding factor was that it supports VMDirectPath, another feature the Core i3-2100 series doesn’t offer. In the Core i3/i5/i7 lineup, VMDirectPath isn’t supported until you reach the Core i5-2400, and the Core i5 processors unfortunately aren’t supported on the server boards I looked at. The processors were $235 each.
I wanted a small footprint for the hosts, so I made sure the motherboard I picked was a Micro ATX form factor. I picked a Micro ATX slim case for the hosts, in the event I ever decide to keep them at the house. Also, 1U server cases are pretty much double the price of the slim case, and I’m trying to keep things reasonable. Right now I’m planning on putting them in a datacenter I have remote access to; I’ll cover that a little more when I talk about my boards. The cases ended up being about $50 each. One thing to keep in mind is that because I’m using a server board, the power supply has to be able to support it.
Picking RAM was a little aggravating. What kind of RAM does my board support? I mean REALLY support? After picking a board, I went to the vendor’s website and looked at supported RAM. Whoa… $120 for 8GB of ECC RAM (2x4GB). That’s kind of high in comparison to desktop DDR3 memory, which averages around $60-$90 for 8GB (2x4GB) of RAM.
The board I selected will take up to 32GB of RAM, but unfortunately no 16GB (2x8GB) ECC memory is listed as supported. To be honest, I found that to be the case across most of the boards I looked at. Only the boards that were out of my price range supported 8GB memory sticks. And 16GB (2x8GB) of ECC memory seemed to hover around $180-$350. Maybe later, I’ll upgrade to 32GB when more memory SKUs are supported on my boards.
I did find some supported memory for $48 for 4GB of ECC RAM (Kingston KVR1333D3E9S/4G). That comes out to $96 per 8GB. Much more attractive than $120, especially when buying 32GB across 2 hosts. That’s a savings of $24×4 or $96. That puts it in line with desktop DDR3 memory. With memory not significantly more expensive, I was alright with going the ECC route. Total memory cost was about $384.
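To sanity-check the memory math, here's a quick sketch in Python. The only inputs are the prices quoted above: the $47.99 Kingston 4GB stick and the $120 vendor-listed 8GB ECC kit.

```python
# Memory cost sanity check for the two-host build.
STICK_PRICE = 47.99        # Kingston KVR1333D3E9S/4G, 4GB ECC stick
VENDOR_KIT_PRICE = 120.00  # vendor-listed 8GB (2x4GB) ECC kit

kit_price = 2 * STICK_PRICE                    # 8GB built from two 4GB sticks
total = 8 * STICK_PRICE                        # 32GB across 2 hosts = 8 sticks
savings = (VENDOR_KIT_PRICE - kit_price) * 4   # four 8GB kits' worth

print(f"8GB kit:     ${kit_price:.2f}")   # $95.98, roughly $96 per 8GB
print(f"Total RAM:   ${total:.2f}")       # $383.92, about $384
print(f"Savings:     ${savings:.2f}")     # $96.08, roughly $24 x 4
```

The exact figures land within pennies of the rounded $96/$384 numbers above.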
Now on to the boards. I wanted a board that was on the VMware Compatibility Guide, but wasn’t too expensive. I really wanted to stay with a SandyBridge board/processor combination. That limitation wasn’t really helpful. If you look at the HCL, there aren’t many SandyBridge systems listed. In fact, only 2 motherboards that support the Xeon E3-1200 series are on the HCL. They are the Intel S1200BTL and Intel S1200BTS.
What is the difference between these 2 boards? The BTL has a C204 chipset and the BTS has a C202 chipset. What’s the difference between the two chipsets? Suffice it to say that the C204 is a little better grade than the C202; for example, the C204 supports SATA III, while the C202 doesn’t. Remember that I wanted a Micro ATX footprint, and the BTS is the only one of the two that is Micro ATX. I would also like the ability to use IPMI and iKVM. The S1200BTL supports iKVM with an add-on card, but again, the BTL is not a Micro ATX form factor. A bit of a dilemma here: the features I want are on the board I don’t want (size-wise).
Because there were a couple C202 and C204 based systems, along with the Intel S1200BT boards, on the VMware HCL, I figured that other C20x boards would also work. It has been discussed several times on the VMTN forums how the Compatibility Guide is validated, and just because a system/board isn’t on the list, it doesn’t mean it won’t work. It just means it hasn’t been tested.
I started looking at some of the C20x boards available from ASUS, SuperMicro, and TYAN. Several of these had multiple nics onboard, some with only one being recognized by ESXi 4.1, some with both, so I tried to narrow my results to those where both nics were supported. As I mentioned earlier, I wanted a board that supported iKVM, given that I’ll be putting these in a remote datacenter. Some boards had native support, while others required a $50+ add-on card. I also wanted to keep my cost down.
I ended up going with a TYAN S5510GM3NR. I picked this board because all 3 onboard nics are recognized (as mentioned in the product reviews on NewEgg), and one can also be used for iKVM without an add-on card. I also liked that the S5510GM3NR has a C204 chipset, supporting SATA III, in case I decide to put a SATA III drive in it for local VM storage. The board retailed for about $180.
Because the Core i3/i5/i7 SandyBridge processors include onboard graphics, I had to take graphics into consideration; not all Xeon E3-1200 processors have onboard graphics support. The Tyan board includes onboard graphics, mitigating the need for a video card.
One caveat to this board is that it is an EPS12V board, not an ATX12V board. Standard power supplies for desktop systems (like the one in the case I selected) are typically ATX12V, not EPS12V. I’m not going to go into the details of the differences, but suffice it to say, I initially thought I was going to have to select a different case to accommodate an EPS12V power supply. ATX12V boards have an additional 4-pin power connector (beyond the standard 20+4), while EPS12V boards have an additional 8-pin power connector. A 4-to-8 pin converter did the trick for me.
Also, with the board being unsupported, I didn’t expect the Manufacturer/Model to be listed as “empty” in the ESXi interface. Not really a big deal, though, given that all the components display properly in the Hardware Status tab.
The boards have 3 supported onboard nics, but I wanted additional nics so I could separate my management/VM/vMotion network from my storage network. Again, I wanted to keep the build as cost effective as possible, so I picked 2 Intel EXPI9301CTBLK nics per host, which others have seen success with. I would have liked a single dual-port gigabit nic like Jeramiah chose, but I really didn’t want to pay $150 per nic when the EXPI9301CTBLK was only $25; the 4 additional nics total out at about $100.
Another thing I didn’t realize is that the iKVM nic is also used as an ESXi nic, so a single nic does double duty for ESXi and iKVM. All 5 nics are reported as Intel 82574L.
I was working in Atlanta a couple weeks ago and went by Micro Center. I noticed they had a 60GB SATA II SSD for $100. The drive is a rebranded A-DATA S599 with some good reviews. I wanted an SSD for each host, so I could leverage vSphere 5’s host cache feature. The drives retailed for $99.99 each ($89.99 today), plus $3.50 for some 2.5-to-3.5 rail kits.
Below is the cost of the 2 systems.

| Item | Description | Unit Price | Qty | Total |
|------|-------------|------------|-----|-------|
| Memory | Kingston 4GB Unbuffered ECC RAM | $47.99 | 8 | $383.92 |
| Nic | Intel EXPI9301CTBLK Single Port Network Adapter | $24.99 | 4 | $99.96 |
| Case | Rosewill R379-M Slim MicroATX Case w/Power Supply | | | |
| SSD | 64GB Microcenter G2 SSD | $99.99 | 2 | $199.98 |
| Adapter | 2.5″ Hard Drive Mounting Bracket | $3.49 | 2 | $6.98 |
| 4 to 8 pin power adapter | Athena Power 8″ Extension & Conversion Four-In-One Cable | $4.99 | 2 | $9.98 |
That’s more than I wanted to spend… I could have gotten away with something like $1,200 for both of the hosts had I gone with a desktop board and desktop DDR3 memory, while not being able to use the onboard nic (Update: Kendrick Coleman blogged about the RealTek 8111e working), and with no SSDs and no iKVM. That would be about $600 per host. Dividing the total cost by 2, each host comes to $825.38. Given that the SSDs were about $100 each, had I not added them (and used USB sticks instead), each host would have been about $725. I think having 5 supported nics, iKVM, a quad core (Hyperthreaded) processor, and a local SSD for $825 each is a pretty good deal. Also, keep in mind I have 8 logical cores and VMDirectPath support.
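For a quick sanity check of the per-host figure, here's a small Python sketch that totals the per-host component prices quoted in this post. The case and motherboard entries use the approximate "about $50" and "about $180" figures from the prose rather than exact prices, so the result lands a few dollars under the exact $825.38.

```python
# Approximate per-host cost, using prices quoted in the post.
per_host = {
    "Xeon E3-1230 CPU":             235.00,
    "TYAN S5510GM3NR board":        180.00,     # "about $180" (approximate)
    "4x Kingston 4GB ECC RAM":      4 * 47.99,
    "2x Intel EXPI9301CTBLK nics":  2 * 24.99,
    "Microcenter G2 SSD":           99.99,
    "2.5-to-3.5 mounting bracket":  3.49,
    "4-to-8 pin power adapter":     4.99,
    "Rosewill slim case w/PSU":     50.00,      # "about $50" (approximate)
}

total = sum(per_host.values())
print(f"Per host:   ${total:.2f}")      # $815.41, close to the quoted $825.38
print(f"Both hosts: ${2 * total:.2f}")
```

The ~$10 gap comes entirely from the rounded case and board prices.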
Now I’m off to find a decent switch that will accommodate both hosts and a dual port storage system.
BTW I’m not picking on Jeramiah’s choices, they are just as valid. Thanks Jeramiah for being such a good sport. I’m looking forward to your Part 3 post.
Update 07/30/11: I have a newer post that details how I could have done it for a little less money.
Update 08/02/12: I have an even newer post that compares 2011 cost vs. 2012 cost, including working 8GB UDIMMs.