One of the Space Efficiency features of Virtual SAN 6.2 that is available for both All-Flash and Hybrid configurations is the introduction of Sparse Virtual Swap files. By default, swap (.vswp) files on Virtual SAN are created 100% reserved. From a thin-provisioned/guaranteed-capacity perspective, they are effectively Lazy Zeroed Thick (LZT).
Virtual Swap files (.vswp) are created when a virtual machine doesn’t have a memory reservation equal to the amount of memory the virtual machine is configured to use. In short, a VM configured with 4GB of RAM and no memory reservation will create a 4GB .vswp file. If a reservation is used, the .vswp file will be the configured amount of memory minus the reserved amount. The same VM with 4GB of RAM, along with a 2GB reservation, will create a 2GB .vswp file.
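The sizing rule above is simple enough to express as a one-liner; this is just a sketch of the arithmetic described here, not any VMware API:

```python
def vswp_size_gb(configured_gb, reservation_gb=0):
    """Size of a VM's .vswp file: configured memory minus the memory reservation."""
    return max(configured_gb - reservation_gb, 0)

print(vswp_size_gb(4))     # 4GB VM, no reservation  -> 4
print(vswp_size_gb(4, 2))  # 4GB VM, 2GB reservation -> 2
```

A VM with a full reservation (4GB configured, 4GB reserved) gets a 0GB .vswp file, which is why fully reserved VMs don’t consume swap capacity.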
I was working with a customer last week, going over the configuration, setup, and requirements of Virtual SAN 6.1 when deploying a 2 node configuration. “Technically” this is a 2 node stretched cluster, composed of two data nodes and a witness. Really a 1+1+1 configuration.
One of the reasons for the call was some confusion about the setup, which is fortunately documented in the Virtual SAN 6.1 Stretched Cluster Guide. Cormac Hogan created the initial content, and I took care of a few updates, as well as adding some content specific to 2 node configurations, which are common in Remote Office/Branch Office type deployments.
I pointed the customer to the DOM Owner Force Warm Cache setting in the Stretched Cluster guide.
I very often tear down or otherwise abuse my “lab” environment while working on docs, testing code, trying to replicate issues, and so on. I was trying to recreate an issue the other day and decided to replace my vCenter Appliance. I deleted the VCSA, but left some of the other VMs, as they provided services I needed, like DNS.
After deploying a new VCSA, I noticed an error in the Cluster’s Monitor tab, under Virtual SAN, specific to my VSAN objects. The Compliance status for all my old VMs was “Out of Date.”
I clicked on a single VM, picked my VM storage policy, selected the VM home, and clicked OK. I could have just as easily selected Apply to all. Not hard, but potentially time-consuming with a large number of objects.
With the release of Virtual SAN 6.1 in September, Stretched Cluster and 2 Node support was introduced. There has been some general guidance given around sizing the bandwidth between sites, as well as between the sites and the witness.
How those bandwidth requirements are calculated hasn’t been publicly available. Site to site bandwidth is based on the number of writes a workload has, while site to witness bandwidth is based on the number of Virtual SAN components on a Virtual SAN datastore.
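As a rough illustration of the two bases described above, the calculations might look like the sketch below. The multipliers and per-component rate here are hypothetical placeholders, not VMware’s published factors; the white paper referenced below provides the actual values.

```python
def site_to_site_mbps(write_mbps, data_multiplier=1.4, resync_multiplier=1.25):
    """Inter-site bandwidth scales with workload write bandwidth.
    Multipliers are illustrative placeholders for data and resync overhead."""
    return write_mbps * data_multiplier * resync_multiplier

def witness_mbps(num_components, mbps_per_1000_components=2.0):
    """Site-to-witness bandwidth scales with the number of Virtual SAN
    components on the datastore. Per-component rate is a placeholder."""
    return num_components / 1000 * mbps_per_1000_components

print(site_to_site_mbps(100))  # 100 Mbps of writes -> 175.0 Mbps inter-site
print(witness_mbps(500))       # 500 components     -> 1.0 Mbps to the witness
```

The key takeaway is the different inputs: writes drive the inter-site link, component count drives the witness link.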
Working with the Virtual SAN Engineering team and Virtual SAN Product Management, we’ve put together a white paper on how these bandwidth requirements are calculated, as well as some examples.
Download the Virtual SAN Stretched Cluster Bandwidth Sizing Guidance white paper for more information on how to size bandwidth for Stretched Clusters and 2 Node configurations.
Anyone who has ever had to buy infrastructure for remote or branch offices, or very small offices, has been faced with the predicament of choosing the right combination of capacity, performance, availability, and cost. It can be a daunting task to keep costs down while providing a capable and resilient platform for remote workers.
What does it really take to service the needs of remote offices?
Maybe a local Domain Controller for authentication, some DNS services, a local proxy of some sort, or possibly a database or two. Before the days of virtualization, any one of a few deployment options could have been used. Maybe a single server with all services installed locally was preferred, or possibly a few servers with services somewhat distributed across them. Then there was the challenge of sizing a solution to fit. Such configurations were not uncommon, and having had to support one, a couple of questions come to mind.