I was working with a customer last week, going over the configuration, setup, and requirements of Virtual SAN 6.1 when deploying a 2 node configuration. “Technically” this is a 2 node stretched cluster, composed of two data nodes and a witness: really a 1+1+1 configuration.
One of the reasons for the call was some confusion about the setup, which is fortunately documented in the Virtual SAN 6.1 Stretched Cluster Guide. Cormac Hogan created the initial content, and I took care of a few updates, as well as adding some additional content specific to 2 node configurations, which are common in Remote Office/Branch Office type deployments.
I pointed the customer to the DOM Owner Force Warm Cache setting in the Stretched Cluster guide.
I very often tear down/destroy my “lab” environment while working on docs, testing code, trying to replicate issues, etc. I was trying to recreate an issue the other day and decided to replace my vCenter Appliance. I deleted the VCSA, but left some of the other VMs, as they hosted services I needed, like DNS.
After deploying a new VCSA, I noticed an error in the Cluster’s Monitor tab, under Virtual SAN, specific to my VSAN objects. The Compliance status for all my old VMs was “Out of Date.”
I clicked on a single VM, picked my VM storage policy, selected the VM Home object, and clicked OK. I could have just as easily selected Apply to all. Not hard, but potentially time consuming when there are a lot of objects.
With the release of Virtual SAN 6.1 in September, Stretched Clusters and 2 Node support were introduced. There has been some general guidance given around sizing the bandwidth between sites, as well as between the sites and the witness.
How those bandwidth requirements are calculated hasn’t been publicly available. Site to site bandwidth is based on the number of writes a workload has, while site to witness bandwidth is based on the number of Virtual SAN components on a Virtual SAN datastore.
Working with the Virtual SAN Engineering team and Virtual SAN Product Management, we’ve put together a white paper on how these bandwidth requirements are calculated, as well as some examples.
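As a rough sketch of how those two calculations differ, the snippet below models them in Python. The specific numbers here are assumptions for illustration (a 1.4 data multiplier, a 1.25 resynchronization multiplier, and roughly 2 Mbps per 1,000 components for the witness link); the white paper is the authoritative source for the actual multipliers and worked examples.

```python
def intersite_bandwidth_mbps(write_mbps, data_multiplier=1.4, resync_multiplier=1.25):
    """Site-to-site bandwidth scales with the workload's write bandwidth,
    padded by overhead multipliers (illustrative values, not official ones)."""
    return write_mbps * data_multiplier * resync_multiplier

def witness_bandwidth_mbps(component_count, mbps_per_1000_components=2.0):
    """Site-to-witness bandwidth scales with the number of Virtual SAN
    components on the datastore, not with write traffic."""
    return component_count / 1000.0 * mbps_per_1000_components

# Example: a workload writing 50 Mbps, with 1,000 components on the datastore
print(intersite_bandwidth_mbps(50))   # bandwidth needed between data sites
print(witness_bandwidth_mbps(1000))   # bandwidth needed to the witness
```

Note how the two links are sized from entirely different inputs: doubling the write rate doubles the site-to-site requirement but leaves the witness link untouched, while adding VMs (and therefore components) grows only the witness requirement.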
Anyone who has ever had to buy infrastructure for remote or branch offices, or very small offices, has been faced with the predicament of choosing the right combination of capacity, performance, availability, and cost. It can be a daunting task, to keep costs down, while providing a capable and resilient platform for remote workers.
What does it really take to service the needs of remote offices? Maybe a local Domain Controller for authentication, some DNS services, a local proxy of some sort, or possibly a database or two. Before the days of virtualization, any one of a few deployment options could have been used. Maybe a single server with all services installed locally was preferred, or possibly a few servers with services somewhat distributed across them. Then there was the challenge of sizing a solution to fit. Such configurations were not uncommon, and having had to support one myself, a couple of questions come to mind.
I was talking with a fellow VMware guy, Matt Lydy, at VMworld Barcelona and he brought something to my attention around the Virtual SAN 6.x Health Check Plug-in. Matt is a Technical Account Manager in Ohio. I had the pleasure of working the VMware booth with him at VMworld US. He mentioned that the Health Check plugin requires the vSphere Distributed Resource Scheduler (DRS). I was pretty sure this wasn’t correct, but upon further conversation, he specified that DRS is required for the automatic installation. At that point a lightbulb went off. It certainly does.