October 13, 2024

Hey guys, why does this VM have 3 NICs?

Virtualization, as many say, is a journey. Some people are well down the path, while others are just beginning to understand how it works. This post is about a situation I recently came across where I helped another admin understand how things really work.

The Situation
I was performing some routine maintenance on some ESX hosts and VMs at one of my many jobs and noticed something a little strange.

Several of the guests had more NICs listed in the guest OS than vCenter reported. I took a closer look, and I couldn’t believe my eyes. One of the other admins had configured teamed NICs. Keep in mind that these guests only used a single IP address, without any kind of multi-homed configuration.

I thought to myself, “Teamed NICs, in a VM?” How could this be? I had been using VMware since Workstation 1.0, and I had never seen teamed NICs inside a VM.

I had to pose the question: “Why do we need teamed NICs in a VM?”

I got a very quick response: “For redundancy.” It was delivered with a sense of authority and conviction.

NIC Teaming
NIC teaming is handled by software provided by the NIC vendor and is typically used to make two (or more) NICs appear to the operating system as a single logical NIC, for failover and sometimes for aggregated bandwidth.

For those familiar with VMware products, the NIC presented to a VM is often a VMware Accelerated NIC, and I wasn’t aware of any software that could team those. Upon closer examination, these VMs were configured to use the Intel e1000 NIC rather than the VMware Accelerated NIC.

Digging further, I found that the admin had downloaded the Intel e1000 NIC drivers, along with the teaming software, from Intel. That software package provided the ability to team the two NICs, which is why the guest reported 3 NICs: two e1000 adapters plus the team.
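
As a quick sanity check from inside the guest, you can list the adapters the OS sees. Here’s a minimal sketch, assuming Python and the third-party psutil package are installed in the guest:

    # List the network adapters the guest OS reports, with link state and
    # speed. In the teamed configuration described above, you'd see both
    # e1000 adapters plus the team device.
    import psutil  # third-party: pip install psutil

    for name, stats in psutil.net_if_stats().items():
        print(f"{name}: up={stats.isup}, speed={stats.speed} Mb/s")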

Why did the admin team the NICs? Redundancy, again. In this environment, physical servers had always been built with teamed NICs, each NIC connected to a different switch. What was missed is that the physical NICs were already redundant at the vSwitch level: if a pNIC fails, the vSwitch fails traffic over to the surviving uplink, transparently to the guest.
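
To see where that redundancy actually lives, you can pull each vSwitch’s uplink list from the hosts. Here’s a minimal sketch using pyVmomi, VMware’s Python SDK; the vCenter address and credentials are placeholders:

    # Connect to vCenter and print every standard vSwitch's physical
    # uplinks. Two or more pnics behind a vSwitch means failover is
    # already handled below the guest.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab shortcut; verify certs in production
    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            for vswitch in host.config.network.vswitch:
                print(host.name, vswitch.name, "uplinks:", list(vswitch.pnic))
    finally:
        Disconnect(si)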

After walking through how the vSphere configuration works, we all agreed that teaming NICs inside a VM wasn’t necessary in our environment.

Further Discussion?
Are there any configurations that you can think of where teamed NICs would be needed?

15 thoughts on “Hey guys, why does this VM have 3 NICs?”

  1. Hello,

    I think more bandwidth is a good reason.
    Not for SAN traffic, because you use MPIO, which doesn’t require a team, but I have some Veeam VMs that could use more than 1 Gbps.

    However, I’m not sure how to set up an active/active team inside a VM, or if it’s even possible.

  2. Not only do I not really see a benefit to this, but there could actually be an unexpected drawback as well.

    Take a look at this thread on the VMTN forums regarding in-guest multipathing for iSCSI:

    http://communities.vmware.com/message/1494252#1494252

    In it, they describe a scenario where a pNIC fails and the guest isn’t aware of it, so it sends traffic down what is essentially a dead path. It eventually recovers, but not quickly enough for applications like Exchange or SQL to remain online. The issue is caused by the fact that when a physical NIC in a vSwitch fails, the link status in the guest never changes and the vNIC stays online (or so the guest thinks). You won’t see “network cable disconnected” in the guest, since ESX abstracts that away from the virtual machine, as it should.

    The workaround described is to use NICs on two different vSwitches. It’s not ideal, it complicates the networking, and it may not even be an option on blades, where NICs are at a premium.

    Granted, this is for multipathing, where traffic flows down both NICs all the time, whereas NIC teaming may be used just for redundancy (and therefore one NIC remains unused). That said, it could still be an issue, and it’s another reason to avoid NIC teaming in the guest.

    Good post. I’d be curious to hear if others have used NIC teaming in a guest and, if so, why.

    Matt

  3. We use multiple NICs in a VM for the sole purpose of multihoming, when mounting storage for tier-1 apps directly inside the VMs.

    For example, Exchange. I use one e1000 for user traffic (with 4 vmnics in the vSwitch) and one VMXNET3 to mount 10GbE iSCSI storage (with two 10GbE vmnics in the vSwitch). I just put that vNIC in a different port group, piggybacking on the VMkernel network the host uses to mount its datastores over 10GbE. This works for NFS to Oracle apps/DBs as well.

    1. Well, let me do a little more clarification…

      These VMs are configured in a pretty basic, standard configuration: no mounting of storage from within the VM.

      That’s kind of why I thought it was odd.

      Your configuration is a little different from the one I’m talking about.

      Thanks for the input!
      Jase

  4. I often team NICs inside a VM, and my reason is always that I’d like to double the bandwidth available to the box. This is especially the case with file servers, iSCSI targets, and machines involved in LVS clusters and/or DRBD setups. As far as you’re concerned, am I wasting my time?

    1. Are you wasting your time? It really depends on a few more factors.
      How much data is your file server actually serving?
      How are your physical NICs configured?
      How are your switches configured?

      There are ways to configure your VM, physical NICs, and switches to get more bandwidth if done right; see the sketch below. I would first look at the overall utilization of file services on the VM to see whether it’s truly warranted.
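
      For example, here’s a minimal pyVmomi sketch of the vSwitch side of that. It assumes a connected vim.HostSystem object named host, a standard vSwitch named vSwitch0, and uplinks already cabled into a static EtherChannel on the physical switch; it sets the vSwitch load-balancing policy to IP hash so a single VM’s flows can spread across multiple uplinks:

          # Assumes "host" is a connected pyVmomi vim.HostSystem object.
          # Moves vSwitch0 from the default policy (route based on the
          # originating virtual port ID) to IP hash. A matching static
          # EtherChannel must exist on the physical switch, or this
          # breaks connectivity.
          net_sys = host.configManager.networkSystem
          vswitch = next(v for v in net_sys.networkInfo.vswitch
                         if v.name == "vSwitch0")
          spec = vswitch.spec
          spec.policy.nicTeaming.policy = "loadbalance_ip"  # IP-hash teaming
          net_sys.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=spec)

      With the default originating-port-ID policy, each vNIC is pinned to a single uplink, which is why one vNIC tops out at one pNIC’s worth of bandwidth.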

  5. Hello,

    I use three vNICs, but not for teaming; I use them for load balancing. I have ISA Servers under multicast load balancing, consisting of three ISA Server arrays.

    I use one for internal, one for external, and one for intra-array communication.

  6. @that1guynick
    Yeah, but that’s not really NIC teaming. What you’re speaking of is done all the time in typical “front-end” LAN-facing networks and “back-end” storage networks.

    I’m still trying to answer the same question: Is traditional NIC teaming ever warranted inside a VM? Typical scenarios we’re talking about are enterprise-class virtualization stacks running modern hypervisors, such as vSphere 4/5 with multiple 1Gb and 10Gb physical NICs/CNAs.

    Obviously, redundancy is not a reason to team virtual NICs. A single virtual NIC is never going to fail like we’re used to physical NICs failing. We’ll leave physical redundancy and failover to the VMkernel – not the VM.

    So that leaves an increase in bandwidth. Does NIC teaming inside a VM actually increase the bandwidth available to that VM? I don’t know. I have yet to find reliable documentation on it and I haven’t tested it in a lab.

    All the best,

    Mike Brown
    http://VirtuallyMikeBrown.com
    https://twitter.com/#!/VirtuallyMikeB
    http://LinkedIn.com/in/michaelbbrown

  7. I’ve come across a similar situation where I’m building a VM guest with Windows 2008 R2 on it and 2 physical ports mapped to it via a virtual switch on an ESXi 5 host.

    Now I see 2 adapters showing up in the guest machine, whereas I just need to assign one IP.

    I’ve teamed the 2 adapters from the ESXi end, but now how do I go about assigning one IP to the 2 adapters that appear in the guest machine?

    thanks,
    vikram

    1. vikram,

      Why would you need to present 2 virtual NICs to the VM? If both NICs are attached to the same virtual switch, there should already be redundancy at the virtual switch level.

      I’m somewhat confused by the requirement of 2 NICs inside the VM.

      Jase

  8. You’re right; I was following the standard physical machine protocol. My bad.

    Realized after thinking about it with my eyes closed. 🙂

    Thanks so much for the article; it sure got my noob head around the concept.

    Cheers
    vikram

  9. Hello, I’m facing the same situation…
    I have a guest with AV software installed on it. It has just one vNIC, a VMXNET3.
    The VMware infrastructure is ESXi 4.1. The host has 4 pNICs connected to a standard vSwitch.
    I can’t use VMware’s IP-based teaming due to physical switch restrictions, so I’m using the standard VMware teaming (based on MAC).
    The guest is sending and receiving a lot of traffic with thousands of clients, and it’s hitting network constraints. Memory, CPU, and disk are OK, but updates are taking longer to finish, and we’re seeing high network usage.

    I want to add one more vNIC to the guest but still keep just one IP on the VM. That would be nice…
    I don’t know how to configure teaming inside the guest. I’d be grateful if someone could share that tip here.

    Edson

    1. Edson,

      In the blog post, the method used was to download the Intel e1000 drivers from Intel (not the native VMware drivers) and team the NICs from within the VM.

      If you are updating thousands of clients, though, it might be a better solution to scale out the number of AV servers you have.

      Your mileage may vary, but I would personally scale out your AV solution.
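
      That said, if you do want to experiment with the in-guest route, the first step is presenting a second vNIC to the VM. Here’s a minimal pyVmomi sketch, assuming an existing connection and a vim.VirtualMachine object named vm; the port group name is a placeholder:

          # Assumes "vm" is a pyVmomi vim.VirtualMachine object. Adds a
          # second e1000 vNIC on the same port group; the Intel teaming
          # software inside the guest can then bind the two adapters
          # under a single IP.
          from pyVmomi import vim

          nic_spec = vim.vm.device.VirtualDeviceSpec()
          nic_spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add

          nic = vim.vm.device.VirtualE1000()
          nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
          nic.backing.deviceName = "VM Network"  # placeholder port group name
          nic.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
              startConnected=True, allowGuestControl=True)
          nic_spec.device = nic

          # Returns a vim.Task; wait for completion, then rescan in the guest.
          task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[nic_spec]))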

      Cheers,
      Jase
