Tuesday, November 26, 2013

InfiniBand notes, lab, and references


I have read a great article sharing experience with a simple InfiniBand lab architecture, from Vladan's blog here

Some highlights from the article:
  • Implements a 20 Gbps InfiniBand (IB) connection at little cost
  • Supports vSphere 5.x, but requires an unofficial driver, so it should not be used in a production environment
  • Supports Windows Server only up to 2008 R2, so the advanced features of Hyper-V in Windows Server 2012 R2 cannot be used ><
  • Allows testing RDMA (Remote Direct Memory Access) performance
  • Difficult to scale out, as you may need an expensive InfiniBand switch
ref:
Infiniband @ home: your homelab to 20Gbps   http://www.hypervisor.fr/?p=4662
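Before running benchmarks on such a lab, it helps to confirm the HCA link actually came up at the expected rate. A minimal sketch, assuming the OFED userspace tools (ibstat) are installed; the adapter name mlx4_0 and port number are typical ConnectX examples of mine, not something from the article:

```shell
# Parse ibstat-style output and succeed only if the port is Active at the wanted rate.
check_ib_port() {
  want_rate="$1"
  awk -v rate="$want_rate" '
    /State:/ { state = $2 }
    /Rate:/  { got = $2 }
    END { exit !(state == "Active" && got == rate) }'
}

# On a live host (hypothetical device/port; adjust for your HCA):
#   ibstat mlx4_0 1 | check_ib_port 20 && echo "IB link up at 20 Gbps"
```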


Are InfiniBand's Days Numbered?

Mellanox ConnectX-3-based InfiniBand and Ethernet Adapters for IBM System x


Figure 2 shows the Mellanox ConnectX-3 40GbE / FDR IB VPI Adapter (the shipping adapter includes a heatsink over the ASIC, which the figure does not show).

Figure 2. Mellanox ConnectX-3 40GbE / FDR IB VPI Adapter for IBM System x with 3U bracket (required ASIC heatsink not shown)


Features

The Mellanox ConnectX-3 10GbE Adapter has the following features:
  • Two 10 Gigabit Ethernet ports
  • Low-profile form factor adapter with 2U bracket (3U bracket available for CTO orders)
  • PCI Express 3.0 x8 host-interface (PCIe 2.0 and 1.1 compatible)
  • SR-IOV (16 Virtual Functions) supported by KVM
  • Enables Low Latency RDMA over Ethernet (supported with both non-virtualized and SR-IOV enabled virtualized servers)
  • TCP/UDP/IP stateless offload in hardware
  • Traffic steering across multiple cores
  • Intelligent interrupt coalescence
  • Industry-leading throughput and latency performance
  • Software compatible with standard TCP/UDP/IP stacks
  • Legacy and UEFI PXE network boot support

The Mellanox ConnectX-3 40GbE / FDR IB VPI Adapter has the following features:
  • Dual QSFP ports supporting FDR-14 InfiniBand or 40 Gb Ethernet
  • Low-profile form factor adapter with 2U bracket (3U bracket available for CTO orders)
  • PCI Express 3.0 x8 host-interface (PCIe 2.0 and 1.1 compatible)
  • Support for InfiniBand FDR speeds of up to 56 Gbps (auto-negotiation FDR-10, DDR and SDR)
  • Support for Virtual Protocol Interconnect (VPI), which enables one adapter for both InfiniBand and 10/40 Gb Ethernet. Supports three configurations:
    • 2 ports InfiniBand
    • 2 ports Ethernet
    • 1 port InfiniBand and 1 port Ethernet
  • SR-IOV (16 Virtual Functions) supported by KVM (Ethernet mode only)
  • Enables Low Latency RDMA over 40Gb Ethernet (supported with both non-virtualized and SR-IOV enabled virtualized servers)
  • High performance/low-latency networking
  • Sub 1 µs InfiniBand MPI ping latency
  • Support for QSFP to SFP+ adapters for 10 GbE connectivity
  • Traffic steering across multiple cores
  • Legacy and UEFI PXE network boot support (Ethernet mode only)
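The three VPI configurations above are selected per port in firmware. A hedged sketch using Mellanox's mlxconfig from the MFT tools, where LINK_TYPE_P1/P2 take 1 for InfiniBand and 2 for Ethernet; the MST device path is a made-up example, so check `mst status` on your host first:

```shell
# Map a friendly name to the LINK_TYPE_P* value mlxconfig expects (1 = IB, 2 = ETH).
link_type() {
  case "$1" in
    ib)  echo 1 ;;
    eth) echo 2 ;;
    *)   echo "unknown link type: $1" >&2; return 1 ;;
  esac
}

DEV=/dev/mst/mt4099_pci_cr0   # hypothetical device path; see 'mst status'

# Mixed VPI configuration: port 1 InfiniBand, port 2 Ethernet.
# Shown with echo for safety; drop the echo (and reboot afterwards) to apply for real.
echo mlxconfig -d "$DEV" set LINK_TYPE_P1="$(link_type ib)" LINK_TYPE_P2="$(link_type eth)"
```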

4K Video Drives New Demands
Bandwidth and network ports required for Uncompressed 4K & 8K video


Tuesday, November 12, 2013

ESXi 5.0/5.1 does not support Windows Server 2012 R2 / Windows 8.1 :(

1. ESXi 5.0 does not support Windows Server 2012 R2; you will see this error screen when booting from the installation ISO image:

"Your PC ran into a problem and needs to restart. We're just collecting some error info, and then we'll restart for you."
This error screen also appears in certain other situations, e.g. after an improper shutdown or restart, or when some system files have been deleted. But this time it is another case: the OS version is not supported by the current ESXi version. :(
 
2. You will not find Windows Server 2012 R2 in the Guest OS version list when creating a VM, as it is not on the official support list. The Compatibility Guide shows that ESXi 5.0 requires an update to run Windows Server 2012 R2 (the requirement also applies to Windows 8.1).

  
3. Download the ESXi 5.0 offline update bundle from the VMware support site (you will need a VMware account). The esxcli software vib usage details are here.

# esxcli software vib update -d "/vmfs/volumes/51e5245e-45f7c8c0-bdc0-6c3be50dafb0/ISO/update-from-esxi5.0-5.0_update03.zip" 
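Before committing the update, a pre-flight pass is worthwhile. A hedged sketch around the command above: the bundle path is the one from this step, and --dry-run and the maintenanceMode namespace are standard esxcli options, but verify them against your ESXi build. The commands are echoed here for safety; run them directly on the host:

```shell
# Accept only offline-bundle zips named like the ESXi 5.0 update bundles.
is_update_bundle() {
  case "$(basename "$1")" in
    update-from-esxi5.0-*.zip) return 0 ;;
    *) return 1 ;;
  esac
}

BUNDLE="/vmfs/volumes/51e5245e-45f7c8c0-bdc0-6c3be50dafb0/ISO/update-from-esxi5.0-5.0_update03.zip"

if is_update_bundle "$BUNDLE"; then
  # On the host itself: maintenance mode, dry run, then the real update.
  echo esxcli system maintenanceMode set --enable true
  echo esxcli software vib update --dry-run -d "$BUNDLE"
  echo esxcli software vib update -d "$BUNDLE"
fi
```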


4. Installation Result:

Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.

Reboot Required: true

 VIBs Installed: VMware_bootbank_ehci-ehci-hcd_1.0-3vmw.500.1.11.623860, VMware_bootbank_esx-base_5.0.0-3.41.1311175, VMware_bootbank_esx-tboot_5.0.0-2.26.914586, VMware_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.500.2.26.914586, VMware_bootbank_misc-drivers_5.0.0-3.41.1311175, VMware_bootbank_net-be2net_4.0.88.0-1vmw.500.0.7.515841, ....
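The result above can also be checked by script rather than by eye. A small sketch that triggers a reboot only when the update output actually asks for one; the log path is a hypothetical example:

```shell
# Succeed when the "esxcli software vib update" output reports a reboot is required.
needs_reboot() {
  grep -q 'Reboot Required: true'
}

# On the host:
#   esxcli software vib update -d "$BUNDLE" | tee /tmp/vib-update.log
#   needs_reboot < /tmp/vib-update.log && reboot
```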

5. The update process requires a reboot; after rebooting, the newly added OS versions appear when creating a VM.
6. On my lab workstation, an HP Z420, the host froze when initializing ACPI after the reboot; a similar issue has been discussed in the communities.

I quickly figured out that it was a BIOS-related issue, as another Z420 workstation with a higher BIOS version had no problem with the patch update. So the BIOS firmware needs to be updated to the latest version as well.

To update the BIOS firmware of an HP workstation, first download the latest firmware from the HP support site. HP provides several ways to update the BIOS; I used the “Flash System ROM” feature.
  • Download the firmware package and extract it on a Windows PC.
  • Copy the BIN file located at C:\SWSETUP\SP63328\DOS Flash\J61_XXX.BIN to a USB drive.
  • Boot the workstation, enter Computer Setup (F10), and select the “Flash System ROM” feature; the BIN file will be detected.
7. Note: the steps above are for a Windows Server 2012 R2 guest OS on ESXi 5.0; they also apply to ESXi 5.1 and a Windows 8.1 guest OS.






Wednesday, November 6, 2013

Enhanced vMotion Compatibility (EVC)

Enable EVC on an Existing Cluster


When putting ESX hosts with different CPU families into a cluster, Enhanced vMotion Compatibility (EVC) should be enabled to improve CPU compatibility between the hosts. EVC hides CPU features in order to make vMotion compatible across servers with different CPU models.
Procedure
  • If virtual machines are running on hosts with feature sets greater than the EVC mode you intend to enable, ensure that the cluster has no powered-on virtual machines:
    • Power off all the virtual machines on the hosts with feature sets greater than the EVC mode, or
    • Migrate the cluster’s virtual machines to another host using vMotion.
  • Because these virtual machines run with more features than the EVC mode you intend to set, power them off before migrating them back into the cluster after enabling EVC; otherwise you will see:
    • "The host cannot be admitted to the cluster's current Enhanced vMotion Compatibility mode. Powered-on or suspended virtual machines on the host may be using CPU features hidden by that mode."
  • Ensure that the cluster contains hosts with CPUs from only one vendor, either Intel or AMD.
  • Any virtual machines running with a larger feature set than the EVC mode you enabled for the cluster must be powered off before they can be moved back into the cluster.
EVC Mode change and required virtual machine power action:
  • Raise the EVC mode to a CPU baseline with more features: running virtual machines can remain powered on. New EVC mode features are not available to the virtual machines until they are powered off and powered back on again; a full power cycle is required. Rebooting the guest operating system or suspending and resuming the virtual machine is not sufficient.
  • Lower the EVC mode to a CPU baseline with fewer features: power off any virtual machines that are powered on and running at a higher EVC mode than the one you intend to enable.

The Enhanced vMotion Compatibility (EVC) processor support can be found in:
Description of Intel EVC Baselines

EVC Level / EVC Baseline / Description:
  • L0, Intel® "Merom" Gen. (Intel® Xeon® Core™ 2): Applies the baseline feature set of Intel® "Merom" Generation (Intel® Xeon® Core™ 2) processors to all hosts in the cluster.
  • L1, Intel® "Penryn" Gen. (formerly Intel® Xeon® 45nm Core™ 2): Applies the baseline feature set of Intel® "Penryn" Generation (Intel® Xeon® 45nm Core™ 2) processors to all hosts in the cluster. Compared to the Intel® "Merom" Generation EVC mode, this EVC mode exposes additional CPU features including SSE4.1.
  • L2, Intel® "Nehalem" Gen. (formerly Intel® Xeon® Core™ i7): Applies the baseline feature set of Intel® "Nehalem" Generation (Intel® Xeon® Core™ i7) processors to all hosts in the cluster. Compared to the Intel® "Penryn" Generation EVC mode, this EVC mode exposes additional CPU features including SSE4.2 and POPCOUNT.
  • L3, Intel® "Westmere" Gen. (formerly Intel® Xeon® 32nm Core™ i7): Applies the baseline feature set of Intel® "Westmere" Generation (Intel® Xeon® 32nm Core™ i7) processors to all hosts in the cluster. Compared to the Intel® "Nehalem" Generation mode, this EVC mode exposes additional CPU features including AES and PCLMULQDQ.
    Note: Intel® i3/i5 Xeon® "Clarkdale" Series processors that do not support AESNI and PCLMULQDQ cannot be admitted to EVC modes higher than the Intel® "Nehalem" Generation mode.
  • L4, Intel® "Sandy Bridge" Generation: Applies the baseline feature set of Intel® "Sandy Bridge" Generation processors to all hosts in the cluster. Compared to the Intel® "Westmere" Generation mode, this EVC mode exposes additional CPU features including AVX and XSAVE.
    Note: Intel® "Sandy Bridge" processors that do not support AESNI and PCLMULQDQ cannot be admitted to EVC modes higher than the Intel® "Nehalem" Generation mode.
  • L5, Intel® "Ivy Bridge" Generation: Applies the baseline feature set of Intel® "Ivy Bridge" Generation processors to all hosts in the cluster. Compared to the Intel® "Sandy Bridge" Generation EVC mode, this EVC mode exposes additional CPU features including RDRAND, ENFSTRG, FSGSBASE, SMEP, and F16C.
    Note: Some Intel® "Ivy Bridge" processors do not provide the full "Ivy Bridge" feature set. Such processors cannot be admitted to EVC modes higher than the Intel® "Nehalem" Generation mode.
    In vCenter Server 5.1 and 5.5, the Intel® "Ivy Bridge" Generation option is only displayed in the Web Client.
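The marker features listed for each baseline map onto CPU flag names, so on a Linux lab host you can estimate the highest baseline a CPU could join. A rough sketch for lab use only: it assumes the cumulative flag sets of real Intel parts, and it is not VMware's official admission check:

```shell
# Estimate the highest Intel EVC baseline from a CPU flag list, using the marker
# features above: sse4_1 -> Penryn, sse4_2/popcnt -> Nehalem, aes/pclmulqdq ->
# Westmere, avx/xsave -> Sandy Bridge, f16c/rdrand -> Ivy Bridge.
evc_level() {
  flags=" $1 "
  has() { case "$flags" in *" $1 "*) return 0 ;; *) return 1 ;; esac; }
  level="L0 (Merom)"
  has sse4_1                  && level="L1 (Penryn)"
  has sse4_2 && has popcnt    && level="L2 (Nehalem)"
  has aes    && has pclmulqdq && level="L3 (Westmere)"
  has avx    && has xsave     && level="L4 (Sandy Bridge)"
  has f16c   && has rdrand    && level="L5 (Ivy Bridge)"
  echo "$level"
}

# On a live Linux host:
#   evc_level "$(awk -F: '/^flags/{print $2; exit}' /proc/cpuinfo)"
```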


Before you enable EVC on an existing cluster, ensure that the hosts in the cluster meet the requirements listed in EVC Requirements for Hosts.



The EVC options in vSphere 5.5 include "Haswell" CPU series support.

Intel EVC Baselines supported in vCenter Server releases

EVC Cluster Baseline by vCenter Server release (columns are the Intel® Generation baselines):

vCenter Server Release                   | Merom | Penryn | Nehalem | Westmere | Sandy Bridge | Ivy Bridge
VirtualCenter 2.5 U2 and later updates   | Yes   | No     | No      | No       | No           | No
vCenter Server 4.0                       | Yes   | Yes    | Yes     | No       | No           | No
vCenter Server 4.0 U1 and later updates  | Yes   | Yes    | Yes     | Yes      | No           | No
vCenter Server 4.1                       | Yes   | Yes    | Yes     | Yes      | No           | No
vCenter Server 5.0                       | Yes   | Yes    | Yes     | Yes      | Yes          | No
vCenter Server 5.1                       | Yes   | Yes    | Yes     | Yes      | Yes          | Yes
vCenter Server 5.5                       | Yes   | Yes    | Yes     | Yes      | Yes          | Yes*

*The EVC options in vSphere 5.5 include "Haswell" CPU series support.