I very often tear down/destroy my “lab” environment while working on docs, testing code, trying to replicate issues, etc. I was trying to recreate an issue the other day and decided to replace my vCenter Appliance. I deleted the VCSA, but left some of the other VMs, as they hosted services I needed, like DNS.
After deploying a new VCSA, I noticed an error in the Cluster’s Monitor tab, under Virtual SAN, specific to my VSAN objects. The Compliance status for all my old VMs was “Out of Date.”
I clicked on a single VM, picked my VM storage policy, selected the VM home, and clicked OK. I could just as easily have selected Apply to all. Not hard, but potentially time-consuming when there are a lot of objects.
With the release of Virtual SAN 6.1 in September, Stretched Clusters and 2 Node support were introduced. There has been some general guidance around sizing the bandwidth between sites, as well as between the sites and the witness.
How those bandwidth requirements are calculated hasn’t been publicly available. Site-to-site bandwidth is based on the write rate of a workload, while site-to-witness bandwidth is based on the number of Virtual SAN components on the Virtual SAN datastore.
Working with the Virtual SAN Engineering team and Virtual SAN Product Management, we’ve put together a white paper on how these bandwidth requirements are calculated, as well as some examples.
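To make the two sizing inputs concrete, here is a minimal sketch of that arithmetic. The constants used below (a 1.4 data multiplier and 1.25 resynchronization multiplier for inter-site traffic, and roughly 1138 bytes per component over a 5-second interval for witness traffic) are assumptions drawn from VMware’s publicly stated rules of thumb, not authoritative values from the white paper itself:

```python
# Rough sketch of stretched-cluster bandwidth sizing.
# All constants are assumptions based on VMware's published
# rules of thumb, not values taken from the white paper.

DATA_MULTIPLIER = 1.4       # assumed overhead for Virtual SAN metadata traffic
RESYNC_MULTIPLIER = 1.25    # assumed headroom for resynchronization traffic
BYTES_PER_COMPONENT = 1138  # assumed witness traffic per component
INTERVAL_SECONDS = 5        # assumed reporting interval

def site_to_site_mbps(write_bandwidth_mbps: float) -> float:
    """Inter-site bandwidth is driven by the workload's write rate."""
    return write_bandwidth_mbps * DATA_MULTIPLIER * RESYNC_MULTIPLIER

def site_to_witness_mbps(num_components: int) -> float:
    """Site-to-witness bandwidth is driven by the component count."""
    bytes_per_second = BYTES_PER_COMPONENT * num_components / INTERVAL_SECONDS
    return bytes_per_second * 8 / 1_000_000  # bytes/s -> megabits/s

# A workload writing at 10 Mbps would need 10 * 1.4 * 1.25 = 17.5 Mbps
# between sites.
print(round(site_to_site_mbps(10), 2))
# 1000 components work out to roughly 1.8 Mbps to the witness, which lines
# up with the commonly quoted ~2 Mbps per 1000 components rule of thumb.
print(round(site_to_witness_mbps(1000), 2))
```

The key takeaway matches the text above: the inter-site link scales with writes, while the witness link scales only with the object/component count.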
Anyone who has ever had to buy infrastructure for remote or branch offices, or very small offices, has been faced with the predicament of choosing the right combination of capacity, performance, availability, and cost. It can be a daunting task to keep costs down while providing a capable and resilient platform for remote workers.
What does it really take to service the needs of remote offices? Maybe a local Domain Controller for authentication, some DNS services, a local proxy of some sort, or possibly a database or two. Before the days of virtualization, any one of a few deployment options could have been used. Maybe a single server with all services installed locally was preferred, or possibly a few servers with services somewhat distributed across them. Then there was the challenge of sizing a solution to fit. Such configurations were not uncommon, and having had to support one, a couple of questions come to mind.
I was talking with a fellow VMware guy, Matt Lydy, at VMworld Barcelona, and he brought something to my attention around the Virtual SAN 6.x Health Check Plug-in. Matt is a Technical Account Manager in Ohio, and I had the pleasure of working the VMware booth with him at VMworld US. He mentioned that the Health Check Plug-in requires the Distributed Resource Scheduler (DRS). I was pretty sure this wasn’t correct, but upon further conversation, he specified that DRS is required for the automatic installation. At that point a lightbulb went off. It certainly does.
It has been a while since I’ve posted. Really heads down lately, getting to know my new role. As a refresher, I’ve moved from a Field Support role as a vSpecialist at EMC to a Tech Marketing role at VMware in the Storage and Availability group.
As much as I’d like to say I’m done with drinking from the firehose, I’m really not. I’m focusing on Virtual SAN and the partner ecosystem that works with it. Lots to learn & lots of folks to work with (internally & externally). Having the visibility I have now is really different from before. It is very exciting, and needless to say very busy, with VMworld coming up.