March 17, 2024

Home Lab Hosts – Just in time for vSphere 5

I’ve been looking at what type of hardware to use for a home lab for quite a while.

Several people have set up some pretty decent/usable home labs… Here are a few to point out:

As I mentioned in a previous post, picking hosts can be a daunting task, and there are a few things to take into account.  I looked at desktop configurations that would support vSphere, and at server configurations that vSphere officially supports.

Here are the two scenarios I kept running into in the comparison… (Also keep in mind that Micro ATX is the form factor I chose, for space reasons.)

  1. If I configured a desktop board that “has been known” to work,
    • I could get a motherboard that was reasonably priced.
    • Compatible RAM is very reasonably priced.
    • There is only a single onboard nic.
    • The onboard nic is more than likely not supported, so additional nics are required.
    • The available expansion slots varied: 1-2 PCI-X slots and 1-4 PCI Express slots.
    • AMD offered better options for hex-core CPUs.
  2. If I configured a server board that “is verified” to work,
    • More often than not, ECC RAM was required.  And as I mentioned in my previous post, ECC RAM is not cheap.
    • One or more onboard nics are available, and oftentimes they are supported by ESXi without requiring additional drivers.
    • The motherboard is more expensive than a desktop motherboard.

What do I sacrifice to get the best of both worlds?  RAM at a better price, while having to purchase more nics? How many cores do I really need…

CPU
Do I need more cores, like Jeramiah chose? Did he give anything up by choosing AMD over Intel?  Is VMDirectPath supported on the AMD desktop-based boards? Not sure.

Also, because many Sandy Bridge server boards support the Intel Pentium G620(T)/G840(T)/G850 and Core i3-2100(T)/2105/2120, I had some options.  The Pentium Gxxx processors have two cores but no Hyperthreading, while the Core i3-2100 series procs have two cores with Hyperthreading.

Also, because I chose to buy 2 hosts, and not 5 like Jeramiah, I wanted to be sure I could run nested ESXi boxes and have enough cores for the nested ESXi hosts as well.  So I had to choose hosts with at least 4 cores, which knocks the Pentium G6xx/8xx processors out of the running.  The Core i3-2100 series procs have the right number of logical cores, but not enough physical cores.  I almost picked the Core i3-2100T, a low-power version of the Core i3-2100, but I’m pretty certain I’m going to need at least 4 physical cores when running nested ESXi hosts from time to time.

So I decided to go with the Xeon E3-1200 series.  The E3-1220 offers 4 cores, but unfortunately no Hyperthreading, so I picked the Xeon E3-1230 as my processor of choice: 4 cores with Hyperthreading.  Another deciding factor was that it supports VMDirectPath, another feature the Core i3-2100 series doesn’t have.  Going up the Core i3/i5/i7 line, VMDirectPath isn’t supported until the Core i5-2400, and the Core i5 processors unfortunately aren’t supported on the server boards I looked at.  The processors were $235 each.
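To put that elimination into something concrete, here’s a quick Python sketch of how the short list shook out. The specs are as cited above, so double-check them against Intel’s ARK pages before ordering anything:

```python
# Illustrative sketch of the CPU elimination described above.
# Specs are as cited in this post; verify against Intel ARK before buying.
candidates = [
    # (model, physical cores, threads, VMDirectPath, fits the server boards I looked at)
    ("Pentium G620", 2, 2, False, True),
    ("Core i3-2100", 2, 4, False, True),
    ("Core i5-2400", 4, 4, True,  False),
    ("Xeon E3-1220", 4, 4, True,  True),
    ("Xeon E3-1230", 4, 8, True,  True),
]

# My requirements: 4+ physical cores for nested ESXi, VMDirectPath,
# and a CPU the server boards will actually take.
shortlist = [c for c in candidates if c[1] >= 4 and c[3] and c[4]]

for model, cores, threads, _, _ in shortlist:
    print(f"{model}: {cores} cores / {threads} threads")
# Leaves the E3-1220 and E3-1230; the E3-1230 wins on Hyperthreading.
```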

Case+Power Supply
I wanted a small footprint for the hosts, so I made sure the motherboard I picked was a Micro ATX form factor.  I picked a Micro ATX slim case for the hosts, in the event I ever decide to keep them at the house.  Also, 1U server cases are about double the price of the slim case, and I’m trying to keep things reasonable.  Right now I’m planning on putting them in a datacenter I have remote access to; I’ll cover that a little more when I talk about my boards.  The cases ended up being about $50 each.  One thing to keep in mind is that, because I’m using a server board, the power supply has to be able to support it (more on that below).

RAM
Picking RAM was a little aggravating.  What kind of RAM does my board support?  I mean REALLY support?  After picking a board, I went to the vendor’s website and looked at supported RAM.  Whoa… $120 for 8GB of ECC RAM (2x4GB).  That’s kind of high in comparison to desktop DDR3 memory, which averages around $60-$90 for 8GB (2x4GB) of RAM.

The board I selected will take up to 32GB of RAM, but unfortunately no 16GB (2x8GB) ECC memory is listed as supported.  To be honest, I found that to be the case across most of the boards I looked at.  Only the boards that were out of my price range supported 8GB memory sticks.  And 16GB (2x8GB) of ECC memory seemed to hover around $180-$350.  Maybe later I’ll upgrade to 32GB, when more memory SKUs are supported on my boards.

I did find some supported memory for $48 for 4GB of ECC RAM (Kingston KVR1333D3E9S/4G).  That comes out to $96 per 8GB.  Much more attractive than $120, especially when buying 32GB across 2 hosts.  That’s a savings of $24×4 or $96.  That puts it in line with desktop DDR3 memory.  With memory not significantly more expensive, I was alright with going the ECC route.   Total memory cost was about $384.
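For anyone double-checking my math, here’s a rough sketch of the memory pricing. The prices are the ones quoted above and will obviously drift over time:

```python
# Quick math behind the ECC decision, using the prices quoted above
# (desktop DDR3 was running roughly $60-$90 per 8GB kit for comparison).
vendor_listed_kit = 120.00   # 8GB (2x4GB) ECC kit from the vendor's supported-memory list
kingston_stick    = 47.99    # Kingston KVR1333D3E9S/4G, 4GB ECC

kingston_per_8gb = 2 * kingston_stick             # ~$96 per 8GB
savings_per_8gb  = vendor_listed_kit - kingston_per_8gb

sticks_needed = 8                                 # 4 x 4GB per host, 2 hosts = 32GB total
total_memory  = sticks_needed * kingston_stick    # ~$384
total_savings = savings_per_8gb * (sticks_needed // 2)

print(f"Per 8GB: ${kingston_per_8gb:.2f} (saves ${savings_per_8gb:.2f} vs. ${vendor_listed_kit:.2f})")
print(f"32GB across 2 hosts: ${total_memory:.2f}, saving about ${total_savings:.2f}")
```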

Boards
Now on to the boards.  I wanted a board that was on the VMware Compatibility Guide, but wasn’t too expensive, and I really wanted to stay with a Sandy Bridge board/processor combination.  That limitation didn’t leave many options: if you look at the HCL, there aren’t many Sandy Bridge systems listed.  In fact, only 2 motherboards that support the Xeon E3-1200 series are on the HCL: the Intel S1200BTL and the Intel S1200BTS.

What is the difference between these 2 boards? The BTL has a C204 chipset and the BTS has a C202.  Suffice it to say that the C204 is a little better grade than the C202: the C204 supports SATA III, while the C202 doesn’t.  Remember that I wanted a Micro ATX footprint, and the BTS is the only one that is Micro ATX.  Another thing I wanted was IPMI and iKVM capabilities.  The S1200BTL supports iKVM with an add-on card, but again, the BTL is not a Micro ATX form factor.  A little bit of a dilemma here: the features I want are on the board I don’t want (size-wise).

Because there were a couple of C202- and C204-based systems, along with the Intel S1200BT boards, on the VMware HCL, I figured that other C20x boards would also work.  It has been discussed several times on the VMTN forums how the Compatibility Guide is validated: just because a system/board isn’t on the list doesn’t mean it won’t work.  It just means it hasn’t been tested.

I started looking at some of the C20x boards available from ASUS, SuperMicro, and TYAN.  Several of these had multiple nics onboard, some with only one being recognized by ESXi 4.1, some with both, so I tried to narrow my results to those where both nics were supported.  As I mentioned earlier, I wanted a board that supported iKVM, given that I’ll be putting these in a remote datacenter.  Some boards had native support, while others required a $50+ add-on card.  I also wanted to keep costs down.

I ended up going with the TYAN S5510GM3NR.  I picked this board because all 3 onboard nics are recognized (as mentioned in the product reviews on NewEgg), and one can also be used for iKVM without an add-on card.  I also liked that the S5510GM3NR has a C204 chipset, supporting SATA III in case I decide to put a SATA III drive in it for local VM storage.  The board retailed for about $180.

Because the Core i3/i5/i7 Sandy Bridge processors include onboard graphics, I had to take graphics into consideration: not all Xeon E3-1200 processors have onboard graphics support.  The Tyan board includes onboard graphics, eliminating the need for a video card.

One caveat with this board is that it is an EPS12V board, not an ATX12V board.  Standard power supplies for desktop systems (like the one I selected) are typically ATX12V, not EPS12V.  I’m not going to go into the details of the differences, but suffice it to say, I initially thought I was going to have to select a different case to accommodate an EPS12V power supply.  ATX12V boards have an additional 4-pin power connector (beyond the standard 20+4), while EPS12V boards have an additional 8-pin power connector.  A 4-to-8 pin adapter did the trick for me.

Also, even with the board being unsupported, I didn’t expect the Manufacturer/Model to be listed as “empty” in the ESXi interface.  Not really a big deal, given that all the components display properly in the Hardware Status tab.

NICs
The boards have 3 supported onboard nics, but I wanted to add additional nics to separate my management/VM/vMotion network from my storage network.  Again, I wanted to keep the build as cost effective as possible, so I picked 2 Intel EXPI9301CTBLK nics per host, which others have had success with.  I would have liked a single dual-port gigabit nic like Jeramiah chose, but I really didn’t want to pay $150 a nic when the EXPI9301CTBLK was only $25; that comes out to about $100 for the additional nics across both hosts.

Another thing I didn’t realize is that the iKVM nic is also usable as an ESXi nic, so a single nic does double duty for ESXi and iKVM.  All 5 nics are reported as Intel 82574L.
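For what it’s worth, below is a rough sketch of how the 5 nics per host could be split, consistent with the management/VM/vMotion vs. storage separation I described. The vmnic numbering is an assumption until ESXi enumerates the ports, so treat it as a planning note, not gospel:

```python
# Hypothetical uplink plan for the 5 nics in each host (3 onboard 82574L + 2 Intel CTs).
# The vmnic-to-physical-port mapping below is an assumption; confirm it under
# Configuration > Network Adapters in the vSphere Client before cabling.
uplink_plan = {
    "vSwitch0 (Management / VM / vMotion)": [
        "vmnic0 (onboard, shared with iKVM)",
        "vmnic1 (onboard)",
        "vmnic2 (onboard)",
    ],
    "vSwitch1 (IP storage)": [
        "vmnic3 (Intel EXPI9301CTBLK)",
        "vmnic4 (Intel EXPI9301CTBLK)",
    ],
}

for vswitch, uplinks in uplink_plan.items():
    print(vswitch)
    for uplink in uplinks:
        print(f"  - {uplink}")
```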

SSD
I was working in Atlanta a couple of weeks ago and went by Micro Center.  I noticed they had a 60GB SATA II SSD for $100.  The drive is a rebranded A-DATA S599 with some good reviews.  I wanted an SSD for each host so I could leverage vSphere 5’s host cache feature.  The drives retailed for $99.99 each ($89.99 today), plus $3.50 for some 2.5″-to-3.5″ rail kits.

The Cost
Below is the cost of the 2 systems.

Item | Model | Cost | Quantity | Total
Motherboard | Tyan S5510GM3NR | $189.99 | 2 | $379.98
CPU | Intel E3-1230 | $234.99 | 2 | $469.98
Memory | Kingston 4GB Unbuffered ECC RAM | $47.99 | 8 | $383.92
Nic | Intel EXPI9301CTBLK Single Port Network Adapter | $24.99 | 4 | $99.96
Case | Rosewill R379-M Slim MicroATX Case w/Power Supply | $49.99 | 2 | $99.98
SSD | 64GB Microcenter G2 SSD | $99.99 | 2 | $199.98
Adapter | 2.5″ Hard Drive Mounting Bracket | $3.49 | 2 | $6.98
4 to 8 pin power adapter | Athena Power 8″ Extension & Conversion Four-In-One Cable | $4.99 | 2 | $9.98
Total | | | | $1,650.76

$1,650.76.

That’s not what I wanted to spend… I could have gotten away with something like $1,200 for both hosts had I gone with a desktop board and desktop DDR3 memory, accepted not being able to use the onboard nic (Update: Kendrick Coleman blogged about the RealTek 8111e working), and skipped the SSDs and iKVM.  That would be about $600 per host.  As built, dividing the total by 2, each host costs $825.38.  Given that the SSDs were about $100 each, had I not added them (and used USB sticks instead) each host would have been about $725. I think having 5 supported nics, iKVM, a quad-core (Hyperthreaded) processor, and a local SSD for $825 each is a pretty good deal. Also, keep in mind I have 8 logical cores and VMDirectPath support per host.
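Here’s the same per-host math in a quick sketch, for anyone who wants to plug in their own numbers:

```python
# The per-host math from the table above, with and without the SSDs.
total_both_hosts = 1650.76
ssd_each         = 99.99

per_host         = total_both_hosts / 2                   # $825.38 as built
per_host_no_ssd  = (total_both_hosts - 2 * ssd_each) / 2  # ~$725 with USB sticks instead
desktop_per_host = 1200.00 / 2                            # ~$600 for the desktop-board route

print(f"Per host as built:       ${per_host:.2f}")
print(f"Per host without SSDs:   ${per_host_no_ssd:.2f}")
print(f"Desktop-board estimate:  ${desktop_per_host:.2f}")
```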

Now I’m off to find a decent switch that will accommodate both hosts and a dual port storage system.

BTW, I’m not picking on Jeramiah’s choices; they are just as valid.  Thanks, Jeramiah, for being such a good sport. I’m looking forward to your Part 3 post.

Update 07/30/11: I have a newer post that details how I could have done it for a little less money.

Update 08/02/12: I have an even newer post that compares 2011 cost vs. 2012 cost, including working 8GB UDIMMs.

56 thoughts on “Home Lab Hosts – Just in time for vSphere 5”

    1. Very nice setup! I’d love to have that amount of RAM… But I wanted to keep my config as cost effective as possible.

      How much were your 410s?

  1. I got a special price; it cost me about $3200 (but I’m really happy about it even if the price is quite high). I really wanted to have something totally manageable (DRAC card through IPv4/IPv6) even during my different business trips, to set things up really fast.

    Tell me how your setup goes; it feels very cheap & effective 🙂 (You can even put 32GB per server, but I don’t know how much the appropriate RAM for it costs; will check.)

  2. Just curious. Just in time for vSphere 5. I assume you use the free version of ESXi; in vSphere 5 it has an 8GB memory limit. Why buy that amount of memory when you can’t use it?

    1. I can start with the base install for 60 days with up to 48GB of RAM without a license.

      If I choose after that, a vSphere 5 Essentials package for $500 would work without issue.

      Then again, there are the vExpert benefits: http://www.vmware.com/communities/vexpert/

      Also, I purchased 16GB of RAM for each of these hosts before any licensing information was announced.

  3. Hi Jase,

    I recently built myself two new homelab servers as well. I went the AMD way (you can check it out at http://www.vmdamentals.com/?p=2314 ). Very interesting to see how people decide on what is the ultimate whitebox FOR THEM 🙂

    If you get VLAN capable switches, you can run a homelab very well on two NICs (excl. storage NICs) when using VLANs. I use the Linksys SLM2008 for switching extensively. They are relatively cheap fanless 8 port full Gbit switches that support VLANs, jumboframes, port aggregates/etherchannels and much more at a very decent price. Best of all, they just work. If you get two you have enough ports and redundancy as well.

    For routing I am currently looking at the Linksys wRVS4400n, which is a full Gbit security router with wifi and VPN and VLAN capabilities… Should work when you want to route between VLANs. There is also a non-wifi version (RVS4000) which should also work out great now that the v2 software is out (I heard terrible stories about the older v1 firmware).

  4. Hi Jase, just two questions. First, how did everything go after you assembled the servers? Did everything perform as expected and allow you to test what you wanted, or did you have to make additional purchases to get the functionality you need? Second, what did you decide to do for a SAN (if any), as I assume vMotion is something you were interested in testing as well.

    I am new to vmware outside of the single esxi setup and am looking to build my own lab to venture out into the Enterprise level world of virtualization.

    1. Hey Andrew,

      Everything has gone great, but an additional purchase was required.

      1) When I purchased the equipment, I had missed the EPS12V power requirement, and I did get the boxes up on a single 4 pin ATX12V connector. But I added (mentioned in the blog) a 4-to-8 converter.

      If the equipment had stayed at home, I would have likely purchased new case fans, as the included case fans were 30dba+. Something like this should do: http://www.microcenter.com/single_product_results.phtml?product_id=0341408 (<15dba)

      2) Also, at home I was using an inexpensive Dell PowerConnect 2816 switch ($80 on eBay), but have since moved it to a Cisco switch managed by my co-location provider. For shared storage, I'm using an Iomega IX4-200D Cloud Edition for NFS & iSCSI storage. It isn’t the fastest, but is sufficient. The Cloud Edition adds the ability for me to run a remote client and upload/download (encrypted) over the Internet, perfect, given the fact that it moved to the co-lo along with the hosts.

      Good luck to you with the world of virtualization. As you know, it is very flexible and has transformed the capabilities that IT can deliver.

      Be sure to check out my other post on some corners that could be cut on the setup while still being very functional.

      Cheers,
      Jase

  5. Thanks Jase. I will check out your other posts as well. I have used esxi for a few years now, but as I said, am just now venturing into the multi-host enterprise features. Pretty excited about it too!

    If you don’t mind me asking, why did you choose the Iomega vs. one of the more popular choices like Synology, Drobo, or QNAP? It doesn’t look like the performance ratings of the Iomega would hold up to running multiple virtual machines via iSCSI across multiple hosts.

    1. I’m using the Iomega, because it was a freebie from Chad Sakac (now my VP) when I was an EMC customer.

      I fortunately didn’t have to purchase it.

      Kind of like on “Monster Garage”: FREEBIE.

  6. Well I can’t think of a better reason than free lol. Are you happy with the performance of the Iomega? Part of what I will be doing is testing replication of virtualized database, web, and email servers and things like vmotion and HA. Just want to make sure whatever I go with will handle the traffic.

    1. From a performance perspective, everything works decent within the expectations of the Iomega.

      Boot storms are an issue upon powering on all of the VMs… I am keeping my nested ESXi hosts on local storage (SSD), and I have the host cache feature of ESXi enabled on my physical hosts.

      When I reboot a Windows XP VM, it typically takes about 1 minute from the time I initiate a reboot, until I can log in again.

      So the performance is acceptable from a usability standpoint.

  7. That is good to hear. I am looking at two to play with vstoragemotion as well as just vmotion and these are the most affordable of all the brands I have seen. Thanks for your help. Your site is great and very beneficial to someone in my predicament

  8. Jase, have you been able to access the iKVM when the servers are powered down? I’m running an almost identical setup in Foxconn DH-839 cases and have been unable to access the iKVM to remotely power up via my 2960 switch; when the servers are off I have no lights on any of the 3 onboard NICs.
    I’m running the latest 1.03 BIOS and 2.00 iKVM.

    Apart from the iKVM problem I’m loving them, running XenServer 6.0 from USB. I’ve got another set of USB keys with ESXi 5.0.

    1. Mike,

      I unfortunately cannot access the iKVM when the system is powered off. It appears that the ATX-Flex power supply (despite adapter) does not support this.

      In talking with @vTexan, (http://www.vtexan.com/) he picked different cases/power supplies, and can access his hosts without power applied.

      Other than the iKVM issue, like you, I’m pretty happy with these systems as well.

      Thanks,
      Jase

  9. Hi Jase,

    The motherboard (Tyan S5510GM3NR) is not available from the link. Do you have a suggestion for any other board with similar specifications?

    Many thanks,
    Hadi

  10. Any recommendations on a different case/PSU combo that you could turn on and off remotely?

    I assume that it means it needs the EPS12V capability?

    Thanks,

    Jim

    1. Let me ping Tommy Trogden (vtexan.com), as he has the same setup, but different power supplies/cases.

      Thanks,
      Jase

  11. Hi there,

    I think DirectPath would also work with the Intel E3-1235… or not? Where could I find this out, i.e., which CPUs it works on?

    Thanks.

  12. Here is what I ended up going with:

    Tyan S5510GM3NR motherboard
    Intel G620 2.6Ghz CPU with 3MB cache
    Kingston KVR1333D3E9S/4G RAM
    Rosewill Case – FBM-01 RTL
    Rosewill PSU – RG530-S12 530W RT

    I purchased 2 of these systems (with 2 of the Kingston RAM each for 8 GB of RAM in each system).

    The only thing is that the case comes with 2 fans – a 120mm in front and an 80mm in back – but their power connectors are the standard power plugs, not the small 4-pin ones, so they don’t connect to the motherboard. The motherboard sounds a very loud alert because it thinks the front fan is not spinning, so I ended up spending $9 on a different front fan for the case so I could hook it up to the motherboard.

    I really really like this motherboard!

    The only thing that I haven’t been able to do is attach an ISO via the IPMI web interface and have it boot off of that – it just wouldn’t work.

    I ended up just attaching a USB DVD drive and installing from that.

    I installed ESXi 5 to a Patriot 4GB USB stick inside the case – there is a nice USB slot right on the board for exactly that purpose! Very nice! ESXi just recognized everything – the USB stick, the NICs…it all just works!

    Jim

  13. I have a SBS (Small Business Server 2003), an Asterisk PBX, an OpsView monitoring appliance, a Linux box running a LAMP application I wrote and a vCenter server all running on a single machine.

    My CPU usage is running about 575 Mhz most of the time.

    I find that you almost always run out of RAM long before you run out of CPU.

    I am using a storage server that I built myself – http://techwithjim.blogspot.com/2010/11/whs-and-norco-build-sort-of.html – using a Norco case with hot swap drives. I run Windows 2008 on it and then utilize the free Starwind Software iSCSI server to provide iSCSI to my vSphere environment.

    I really like the motherboard you picked – excellent find!

    If I need more horsepower, I would go with the other CPUs you suggested – but I am finding no problems with these ones – a bargain at under $70 each.

    I will probably double the RAM I have in each system (just like you have), and that is very inexpensive to do.

    The other thing I like is that these systems are very quiet – much more so than typical servers.

    Jim

  14. Jase, I’m about ready to purchase this bada** setup, but one question. I have access to a Cisco switch to do LACP – can I do that w/ the onboard NICs, or will I have to get better NICs to accomplish this?

  15. Hello. I hope this post is still active. Here’s what I’m trying to accomplish, and where your (kindly vetted) post works –
    1. small form factor – micro ATX – check
    2. up to 32GB RAM per box – check
    3. onboard NICs – check – MB also supports more if need be
    4. vetted, known and working – check.
    thanks –

    what I’m trying to accomplish is –
    using my synology 1812+ box (already purchased and currently being utilized as a DR holding place) as my shared storage. I’d like to mount as a test DR my veeam images and verify a potential DR solution is averted. I realize having dual 6-core boxes @ work with 128Mb ram and a emc san is quite different that using the synology box with smaller hosts..but I should be able to accomplish several things – mount and test some DR enviroments, and have a decent lab to learn and play with. I like the idea of having the Ikvm and the ssd for some advanced caching features but don’t know if I’d use the caching features for awhile. Given that ‘some’ time has expired and keeping with the micro-atx enviroment – are there any changes you’d make to ‘freshen’ up this build in today’s world?

    thanks again –
    mark

    1. Hrm… To freshen it up?

      The only thing I’ve done (which was substantial) was to upgrade the RAM to 32GB.

      Here is the updated post vSphere 5 Lab Hosts – A Year Later.

      And I have to say going from 16GB to 32GB made a huge difference for me.

      If I were rebuilding the lab, I’d likely go with Ivy Bridge procs, rather than Sandy Bridge, but overall, I’ve been very pleased.

      Cheers,
      Jase

  16. wow – really hard to imagine that all that ‘stuff’ gets stuffed into such a small case – but since all you have is the MB, proc, mem, and additional NICs… guess it works. I’ve started pricing all the parts and the SSD has really dropped in price – down to about $14 now. How have you found the host-cache process to work? I don’t use it at work and don’t know how much I’d use it in a lab, but if you can use the SSD as virtual host cache, instead of physical memory, for VMs that are running on the system – should I get a larger SSD… to play with?
    In reviewing the subsequent blogs it looks like there were only 2 ‘add-ins’ that you mention – one was the 8-to-4 power converter and the other the different fans. I presume the fan is to replace the one that ships with the case?
    I’ve been reviewing your build against Chris Wahl’s… can’t decide which to select. Space isn’t the issue for me, and they are really similar… so hmmm….

    1. I built another one with a Core i3-2100, using a different (cheaper/larger) case, and used an EPS12V compliant power supply.

      I didn’t need the additional cable for that one, given I added a compliant PS.

      The Rosewill Slim Case came with a fan, but it was louder than I wanted initially, as the hosts were once at home. I have them at a Co-Lo facility now, so it isn’t a big deal.

      As far as what I’d do differently? I’d probably get another case with room for more fans, as the MB supports 4 or 5 (can’t remember now), and from time to time, I’ll get an alert about a fan not reporting (because it isn’t there).

      There are several folks that have gone with the TYAN, and have been pretty happy. The biggest issue, is to make sure the BIOS is up to date…

      I saw a review on NewEgg’s site stating that 32GB is not supported. That is simply not true. I have 32GB in the 2 E3-1230 hosts, and 16GB in the i3-2100 host (reused RAM).

      As far as the Host Caching, I noticed more when I only had 16GB of RAM on each host. I haven’t really pushed over subscription since I’ve upgraded to 32GB of RAM. (Going to 32GB on each host was a game changer.)

      Cheers,
      Jase

  17. I’m kind of fortunate in that work will be ‘footing’ the larger piece of this puzzle, but I’m trying to build something that’s going to be quiet – so I don’t wake the house @ night while I’m busy working away – and sizing is a secondary concern, as long as it’s either a micro- or mini-ATX sized solution. I’d discounted the Tyan boards before because of their VMware-compatibility issue, but it’s good to know that it’ll work w/o any issues. I’m hoping to run a DR test with the following VM boxes loaded (1 AD, 1 Exchange 2k10, file/print, SQL, and a Win7 machine). Counting up the physical memory currently implemented, 24GB – being able to spread the VMs across a couple of boxes should work, shouldn’t it? I know I’ll have some compatibility issues, as the current CPUs on my VMs are configured to run with the Cisco C210 dual 6-core processors, but shouldn’t I be able to boot the machine, have Windows detect the change, have Windows make the change – reboot and get the servers up and running?

    Mark

    1. They should be able to power right on, notice the changes, and then reboot to take the new hardware into account (Windows VMs that is).

  18. Hi again –
    Other than the Ivy being the latest ‘gen’ – I’d kind of discounted them, as I thought the slightly lower power and ‘more for graphics’ wouldn’t really come into play for a home lab. Am I missing something? I just got the approval to purchase some home equipment and am slightly confused now – should I use the specs from your 2011 build, or the 2012 build, as the baseline for my systems? I’ll be maxing out the system with memory in either case, and with the new requirements of v5.1 (10 gigs of RAM for vCenter, SQL, SSO, etc.) it looks like I’m gonna need that. I’m kind of leaning towards getting a similar box, but loading it with Windows 7/8 – running VMware Workstation on it and loading the vCenter (stuff) in there so I don’t have to ‘subtract’ the memory from my systems.
    Thoughts… Also, when you bought the equipment – and since I haven’t ‘built’ a system in about 6 years – was everything ‘plug and pray’??

    mark

    1. sea pro,

      The primary differences between the 2011/2012 builds, are the amount of RAM, and the processor type.

      From a RAM perspective… 32GB would now be a no-brainer for me. Going from 16GB to 32GB made all the difference in the world. With 16GB, I often had to be concerned with utilization, overcommitment, swapping to SSD, HA slot sizes, etc. Now with 32GB I seldom have an issue with any of those.

      From a CPU perspective… To be honest, the last time I looked, the E3-1230v2 (Ivy Bridge) was cheaper (on Newegg) than the E3-1230 (Sandy Bridge). In that situation I’d obviously go with the newer processor.

      Is the lab still viable? Absolutely… Storage is my biggest bottleneck.

  19. Hi –
    do you think this board and CPU would work? It seems to provide ‘all’ the requirements of quad-core, VMDirectPath, Hyperthreading, etc. …
    maybe the headache will ease up a bit now? Of course I’m spending a little bit more now (less than $50 more), but shouldn’t I be able to obtain all those features??

    cpu – http://www.newegg.com/Product/Product.aspx?Item=N82E16819116502
    board – http://www.tyan.com/product_SKU_spec.aspx?ProductType=MB&pid=720&SKU=600000238

    1. I have heard people have had success with i7 procs in the TYAN S55xx boards, but I’m not certain if I would want to go that route.

      I’d rather not rock the boat from a physical device support perspective. Too much of my own money.

  20. Wow – what a change, and now a new exercise.

    I got approval for a couple of boxes to build for a home lab. At the last minute, as I was checking everything on the VMware side, I realized that for the new vSphere 5.1 environment, and to stay with best practice, I’m gonna need another 10 gigs of RAM to build out that environment. From what I can tell, to support multiple environments (my lab + work) I’m gonna need to install SSO and a SQL instance box – which eats up another 10 gigs from my platform. This I wasn’t counting on. So the questions are… do I ‘eat’ those resources from my test lab (further degrading any ‘real’ type of DR and test lab environment) or just purchase another box to handle those resources – and manage that via a local install of VMware? The best idea is to purchase another box and manage that ‘locally’, outside the VMware lab/DR site – but that’s going to cost more $$ and tilt my budgeting.
    Questions –
    1. is this really a need, and do I have to limit myself to these choices (either spend more $$ to have a separate box, or degrade my lab/DR site)?
    2. Has someone found a small platform that will support maybe up to 64GB of RAM that I can use in my hosts?

    I still, if I can help it – want to stay as quiet as I can, take up the least amount of footprint that I can – and not have to install another electric circuit to my house to power this monster.

    aaarrrggg !!!!! Help !!!!!

    mark

  21. Have you done anything for 5.5 yet? Looking to build to take advantage of some of the higher-end HA and SRM functionality.

    Love vflash at work and want to build out vds with a Cisco switch 🙂
