March 29, 2024

NFS performance gotcha: vmnic Autonegotiation, RX, & TX

I recently migrated a production environment off of Fibre Channel over to NFS. For anyone looking to implement either NFS or iSCSI in a vSphere or VI3 environment, I would definitely recommend reading the post A “Multivendor Post” to help our mutual NFS customers using VMware, hosted on Chad Sakac’s blog as well as on Vaughn Stewart’s blog. It is a very good read, and it stresses the point that the “storage network” should be configured appropriately for storage traffic.

Best Practices
The second bullet point in the section Performance consideration #2: Design a “Bet the Business” Ethernet Network talks about enabling flow control. In a NetApp environment, the switches are typically set to receive on and the NFS targets are set to transmit (send) on.

I don’t have any EMC equipment, but I do have a NetApp filer, so I looked at the NetApp Technical Report TR-3749, NetApp and VMware vSphere Storage Best Practices, as a reference for configuring my environment.

Scott Lowe posted an article today on some EMC Celerra Optimizations for VMware on NFS, which is a good read, but I could not find anything in it related to flow control at the ESX level. I have an open question with Chad Sakac about recommended flow control settings with EMC storage.

On page 46 of TR-3749 (Section 9.3) the first paragraph reads: Flow control is the process of managing the rate of data transmission between two nodes to prevent a fast sender from overrunning a slow receiver. Flow control can be configured on ESX servers, FAS storage arrays, and network switches. It is recommended to configure the end points, ESX servers and NetApp arrays, with flow control set to “send on” and “receive off.”

Configuring Storage NICs
Configuring flow control from the Service Console is pretty straightforward. Use ethtool to adjust the flow control settings of a physical NIC.

The basic syntax to view the current flow control settings of a vmnic is:

ethtool -a ethX

The syntax to change the configuration of a vmnic uses a capital -A, followed by the settings to apply:

ethtool -A ethX [autoneg on|off] [rx on|off] [tx on|off]

So to change the settings of vmnic2, the syntax would be:

ethtool -A vmnic2 autoneg off rx off tx on
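To confirm the change took effect, run the lowercase -a form against the same NIC afterward. On my hosts the output looks roughly like the sketch below; the exact formatting can vary a bit between ethtool versions.

ethtool -a vmnic2
Pause parameters for vmnic2:
Autonegotiate:  off
RX:             off
TX:             on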

Upon initial setup I configured each of the storage NICs with autonegotiation off, receive off, and transmit on. So my hosts and my NetApp were set to transmit, and my switches were set to receive, per TR-3749. Performance was awesome, and the NetApp filer’s CPU utilization was low as well. Things looked good.
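For reference, the matching change on the NetApp side is applied per interface with ONTAP’s ifconfig flowcontrol option (and, as a commenter notes below, it needs to go into the filer’s /etc/rc to survive a filer reboot). This is just a sketch; e0a is a placeholder for your own NFS-serving interface.

ifconfig e0a flowcontrol send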

The Gotcha
It is not unheard of to keep an ESX host up for months or years at a time, so the “gotcha” wasn’t apparent until several months after migrating VMs from our older FC SAN to the NFS datastores presented by the NetApp. With about 300 guests at the time of initial setup, watching CPU utilization rise (somewhat) on my filer did not seem strange as I migrated more guests from FC to NFS.

One of my hosts indicated a hardware issue, so I evacuated the guests from it and took it offline. After careful investigation, and a replacement part, the host was brought back online. I still didn’t notice my issue at this point, but the CPU utilization of this host was a little higher than it had been in the past when loaded with the same number of VMs and about the same workload.

Later, a couple of hosts needed to be moved from their temporary location to a more permanent one. Again, I evacuated the VMs, powered the hosts down, moved them, reran connections, and put them back into service. Again, the hosts behaved about the same as before, but I still didn’t notice the gain in CPU utilization.

Looking at my filer, I noticed that the CPU utilization had jumped by about 10% on average, and that the guests were restarting a little more slowly during the most recent boot storm after a patch window. No additional VMs had been added, and no other changes had been made to the environment. The only change was that hosts had been rebooted. Keep in mind it is not uncommon for me to run 70-80 guests per host.

The “gotcha” was that flow control settings configured with ethtool are not persistent across reboots of an ESX host.

Running ethtool -a against all of the vmnics on the moved and rebooted hosts showed that they were no longer set to autonegotiate off / receive off / transmit on.
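If you want to spot-check every storage NIC on a host at once, a quick loop from the Service Console does the job. A minimal sketch; substitute your own vmnic names.

for nic in vmnic2 vmnic3 vmnic4 vmnic5; do
    echo "== $nic =="
    ethtool -a $nic
done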

The Fix
To ensure that all of my storage vmnics (four per host) are properly configured, I modified /etc/rc.local so that the appropriate commands run at startup after an ESX reboot.

ethtool -A vmnicW autoneg off rx off tx on
ethtool -A vmnicX autoneg off rx off tx on
ethtool -A vmnicY autoneg off rx off tx on
ethtool -A vmnicZ autoneg off rx off tx on

Now every time a host is booted, the transmit configuration (per TR-3749) is restored.
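If you would rather not list each NIC separately, the same settings can be applied with a loop in /etc/rc.local. Again, just a sketch with placeholder vmnic names.

for nic in vmnicW vmnicX vmnicY vmnicZ; do
    ethtool -A $nic autoneg off rx off tx on
done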

Note: This also works on ESXi, but it will require modifying rc.local using the unsupported “Tech Support Mode.”

The Conclusion
After correcting all hosts to reapply the “send on” settings on boot, VMs are much more responsive during boot storms, overall host CPU utilization is lower during normal operation, and the NetApp filer’s CPU utilization is lower as well.

The point of the story is that initial configurations can be lost on reboot, depending on the “stickiness” of the configuration.

4 thoughts on “NFS performance gotcha: vmnic Autonegotiation, RX, & TX”

  1. I was thinking of that too, using NFS rather than FC or iSCSI. I looked everywhere, but there’s no option to set the ethtool settings in the vmnic config file, so I guess it’ll have to go into /etc/rc.local. I can’t find anywhere to set the duplex/negotiation on a vmnic via the vClient either. There might be some advanced option to do that, but I just couldn’t find it.

  2. Jase,
    Great entry. I’m only beginning to implement this and haven’t yet rebooted my ESX hosts, but I realized the same thing after reading the end of this VMware KB article:
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1013413

    Of course, it’s the same scenario on the NetApp filers. You need to add the ‘ifconfig ethX flowcontrol send’ command into /etc/rc.

    To clarify, is this a setting you only implement on your storage NICs? How about NICs dedicated to VM port groups or VMotion port groups, etc.? I’m guessing autonegotiate would be best?

    1. Good point about ensuring the same on the NetApp filers, not too many reboots there.

      The big reason I chose to address the storage NICs was that TR-3749 recommends these settings for them.

      Hmmm, autonegotiate for the frontend NICs… I’ll have to chew on that one. I’ll see what I can dig up when a little free time comes my way.

      Thanks for the additional info!
