Some useful points to remind myself when deploying vSphere 5.5 and configuring networking:
Networking Best Practices
Isolate from one another the networks for host management, vSphere vMotion, vSphere FT, and so on, to improve security and performance.
Assign a group of virtual machines to a separate physical NIC. This separation allows for a portion of the total networking workload to be shared evenly across multiple CPUs. The isolated virtual machines can then better handle application traffic, for example, from a Web client.
To physically separate network services and to dedicate a particular set of NICs to a specific network service, create a vSphere Standard Switch or vSphere Distributed Switch for each service. If this is not possible, separate network services on a single switch by attaching them to port groups with different VLAN IDs. In either case, verify with your network administrator that the networks or VLANs you choose are isolated from the rest of your environment and that no routers connect them.
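The sketch below (Python with pyVmomi) shows one way to script this; it is a minimal example, not the only method, and the vCenter address, credentials, switch name, physical NIC, port group name, and VLAN ID are all placeholders.

# Minimal pyVmomi sketch: standard switch on a dedicated uplink plus a
# VLAN-tagged port group for one network service (all names are placeholders).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def create_service_switch(host, vswitch_name, pnic, pg_name, vlan_id):
    """host: vim.HostSystem; pnic e.g. 'vmnic2'; vlan_id agreed with the network admin."""
    net_sys = host.configManager.networkSystem

    # vSwitch backed by a single dedicated physical NIC
    vss_spec = vim.host.VirtualSwitch.Specification(
        numPorts=128,
        bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=[pnic]))
    net_sys.AddVirtualSwitch(vswitchName=vswitch_name, spec=vss_spec)

    # Port group carrying only this service, isolated by VLAN ID
    pg_spec = vim.host.PortGroup.Specification(
        name=pg_name, vlanId=vlan_id, vswitchName=vswitch_name,
        policy=vim.host.NetworkPolicy())
    net_sys.AddPortGroup(portgrp=pg_spec)

# Connection and host lookup (placeholder values)
ctx = ssl._create_unverified_context()
si = SmartConnect(host='vcenter.example.local',
                  user='administrator@vsphere.local', pwd='***', sslContext=ctx)
content = si.RetrieveContent()
esxi_host = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view[0]
create_service_switch(esxi_host, 'vSwitch-FT', 'vmnic2', 'FT-Net', vlan_id=100)
Disconnect(si)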
Keep the vSphere vMotion connection on a separate network. When migration with vMotion occurs, the contents of the guest operating system's memory are transmitted over the network. You can do this either by using VLANs to segment a single physical network or by using separate physical networks (the latter is preferable).
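A minimal sketch of the vMotion side, reusing the connection and host lookup from the previous example; the port group name, VLAN, and IP settings are placeholders.

# Minimal pyVmomi sketch: dedicated VMkernel adapter for vMotion on its own
# port group (assumes `host` was obtained as in the earlier example).
from pyVmomi import vim

def add_vmotion_vmk(host, pg_name='vMotion-Net', vswitch='vSwitch1',
                    vlan_id=50, ip='192.168.50.11', mask='255.255.255.0'):
    net_sys = host.configManager.networkSystem

    # Port group reserved for vMotion traffic (separate VLAN or physical network)
    net_sys.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
        name=pg_name, vlanId=vlan_id, vswitchName=vswitch,
        policy=vim.host.NetworkPolicy()))

    # VMkernel adapter with a static IP on that port group
    vmk = net_sys.AddVirtualNic(
        portgroup=pg_name,
        nic=vim.host.VirtualNic.Specification(
            ip=vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask=mask)))

    # Tag the new vmk so only it carries vMotion traffic
    host.configManager.virtualNicManager.SelectVnicForNicType('vmotion', vmk)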
When using passthrough devices with a Linux kernel version 2.6.20 or earlier, avoid MSI and MSI-X modes because these modes have significant performance impact.
You can add and remove network adapters from a standard or distributed switch without affecting the virtual machines or the network service that is running behind that switch. If you remove all the running hardware, the virtual machines can still communicate among themselves. If you leave one network adapter intact, all the virtual machines can still connect with the physical network.
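For example, the uplink set of a standard switch can be rewritten in place; a minimal sketch using the same `host` object as above, with the switch and NIC names as placeholders.

# Minimal pyVmomi sketch: change the uplink NICs of an existing standard switch.
# VMs on the switch keep physical connectivity as long as one uplink remains.
from pyVmomi import vim

def set_uplinks(host, vswitch_name, nic_devices):
    """nic_devices: desired uplink list, e.g. ['vmnic2', 'vmnic3']; [] = internal only."""
    net_sys = host.configManager.networkSystem
    vss = next(v for v in host.config.network.vswitch if v.name == vswitch_name)

    spec = vss.spec                      # keep ports, MTU, and policy as they are
    spec.bridge = (vim.host.VirtualSwitch.BondBridge(nicDevice=nic_devices)
                   if nic_devices else None)
    net_sys.UpdateVirtualSwitch(vswitchName=vswitch_name, spec=spec)

# e.g. drop to a single uplink; VMs behind vSwitch1 stay connected:
# set_uplinks(esxi_host, 'vSwitch1', ['vmnic2'])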
To protect your most sensitive virtual machines, deploy firewalls in virtual machines that route between virtual networks with uplinks to physical networks and pure virtual networks with no uplinks.
For best performance, use VMXNET 3 virtual machine NICs. The available adapter types are summarized below; a configuration sketch follows the notes.
- E1000: An emulated version of the Intel 82545EM Gigabit Ethernet NIC. A driver for this NIC is not included with all guest operating systems. Typically Linux versions 2.4.19 and later, Windows XP Professional x64 Edition and later, and Windows Server 2003 (32-bit) and later include the E1000 driver.
Note: E1000 does not support jumbo frames prior to ESXi/ESX 4.1.
- E1000e: This feature emulates a newer model of Intel Gigabit NIC (number 82574) in the virtual hardware, known as the "e1000e" vNIC. e1000e is available only on hardware version 8 (and newer) virtual machines in vSphere 5. It is the default vNIC for Windows 8 and newer Windows guest operating systems. For Linux guests, e1000e is not available from the UI (e1000, flexible vmxnet, enhanced vmxnet, and vmxnet3 are available for Linux).
- VMXNET 3: The VMXNET 3 adapter is the next generation of a paravirtualized NIC designed for performance, and is not related to VMXNET or VMXNET 2. It offers all the features available in VMXNET 2, and adds several new features like multiqueue support (also known as Receive Side Scaling in Windows), IPv6 offloads, and MSI/MSI-X interrupt delivery. For information about the performance of VMXNET 3, see Performance Evaluation of VMXNET3 Virtual Network Device. Because operating system vendors do not provide built-in drivers for this card, you must install VMware Tools to have a driver for the VMXNET 3 network adapter available.
VMXNET 3 is supported only for virtual machines version 7 and later, with a limited set of guest operating systems:
- 32- and 64-bit versions of Microsoft Windows 7, 8, XP, 2003, 2003 R2, 2008, 2008 R2, Server 2012 and Server 2012 R2
- 32- and 64-bit versions of Red Hat Enterprise Linux 5.0 and later
- 32- and 64-bit versions of SUSE Linux Enterprise Server 10 and later
- 32- and 64-bit versions of Asianux 3 and later
- 32- and 64-bit versions of Debian 4
- 32- and 64-bit versions of Debian 5
- 32- and 64-bit versions of Debian 6
- 32- and 64-bit versions of Ubuntu 7.04 and later
- 32- and 64-bit versions of Sun Solaris 10 and later
Notes:
- In ESXi/ESX 4.1 and earlier releases, jumbo frames are not supported in the Solaris Guest OS for VMXNET 2 and VMXNET 3. The feature is supported starting with ESXi 5.0 for VMXNET 3 only. For more information, see Enabling Jumbo Frames on the Solaris guest operating system (2012445).
- Fault Tolerance is not supported on a virtual machine configured with a VMXNET 3 vNIC in vSphere 4.0, but is fully supported on vSphere 4.1.
- Windows Server 2012 is supported with e1000, e1000e, and VMXNET 3 on ESXi 5.0 Update 1 or higher.
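As referenced above, here is a minimal sketch for attaching a VMXNET 3 adapter to an existing virtual machine, using the same vCenter connection as in the first example; the VM and port group names are placeholders, and VMware Tools must be installed in the guest to provide the driver.

# Minimal pyVmomi sketch: add a VMXNET 3 NIC to a VM (hardware version 7+,
# guest OS from the supported list above).
from pyVmomi import vim

def add_vmxnet3_nic(vm, portgroup_name):
    nic = vim.vm.device.VirtualVmxnet3()
    nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
        deviceName=portgroup_name)       # standard-switch port group by name
    nic.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
        startConnected=True, allowGuestControl=True)

    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add, device=nic)
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))

# vms = content.viewManager.CreateContainerView(
#     content.rootFolder, [vim.VirtualMachine], True).view
# add_vmxnet3_nic(next(v for v in vms if v.name == 'web01'), 'Prod-VLAN100')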
Physical network adapters connected to the same vSphere Standard Switch or vSphere Distributed Switch should also be connected to the same physical network.
Configure all VMkernel network adapters in a vSphere Distributed Switch with the same MTU. When several VMkernel network adapters, configured with different MTUs, are connected to vSphere distributed switches, you might experience network connectivity problems.
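A quick way to spot mismatches is to dump every VMkernel adapter's MTU per host; a minimal sketch using the `content` object from the first example.

# Minimal pyVmomi sketch: report the MTU of every VMkernel adapter on every
# host so mismatched values stand out at a glance.
from pyVmomi import vim

def report_vmk_mtus(content):
    hosts = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view
    for host in hosts:
        for vnic in host.config.network.vnic:        # VMkernel adapters only
            pg = vnic.portgroup or '(distributed switch)'
            print(f'{host.name:25} {vnic.device:6} {pg:20} mtu={vnic.spec.mtu}')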
When creating a distributed port group, do not use dynamic port binding. Dynamic port binding has been deprecated since ESXi 5.0.
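A minimal sketch that creates a distributed port group with static (early) binding instead; it assumes a vim.DistributedVirtualSwitch object `dvs` has already been looked up, and the name and port count are placeholders.

# Minimal pyVmomi sketch: distributed port group with static binding
# ('earlyBinding'); dynamic binding is deprecated since ESXi 5.0.
from pyVmomi import vim

def add_static_portgroup(dvs, name='VM-DPortGroup', ports=128):
    spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        name=name, numPorts=ports, type='earlyBinding')
    return dvs.AddDVPortgroup_Task(spec=[spec])      # returns a vCenter task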
SR-IOV
Availability of Features
The following features are not available for virtual machines configured with SR-IOV:
- vSphere vMotion
- Storage vMotion
- vShield
- NetFlow
- VXLAN Virtual Wire
- vSphere High Availability
- vSphere Fault Tolerance
- vSphere DRS
- vSphere DPM
- Virtual machine suspend and resume
- Virtual machine snapshots
- MAC-based VLAN for passthrough virtual functions
- Hot addition and removal of virtual devices, memory, and vCPU
- Participation in a cluster environment
- Network statistics for a virtual machine NIC using SR-IOV passthrough
Ref:
http://pubs.vmware.com/vsphere-55/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-55-networking-guide.pdf