Tuesday, December 17, 2013

Install / Upgrade VMware ESXi 5.5.0 GA from ESXi with USB drive on HPz420

Background:
The HP z420 workstation is not on the HCL for vSphere 5.5, and the network adapter is not recognized when installing the ESXi 5.5 host with the stock installation image from VMware. This error interrupts the installation process and cannot be skipped:


"No network adapters were detected. Either no network adapters are physically connected .."


1. I first tried the HP Custom Image for ESXi 5.5.0 GA Install CD (September 2013, here) to install ESXi 5.5, but the network adapter was still not detected.

2. The HP z420 comes with an Intel 82579LM NIC on the X79 motherboard, so we need to use ESXi-Customizer to inject the 82579LM driver into the installation image. First, download the files from the following links:

ESXi-Customizer http://www.v-front.de/p/esxi-customizer.html
VIB file net-e1000e-2.1.4.x86_64.vib  from http://ftp2.pl.freebsd.org/pub/VM/VMware/Drivers/net-e1000e-2.1.4.x86_64.vib
ESXi5.5 image from vmware download
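Before injecting the driver, it is worth sanity-checking the downloaded VIB. VIB packages are ar-format archives, so a truncated download or an HTML error page saved as `.vib` can be caught by checking the `!<arch>` magic bytes. A minimal sketch (the filename matches the link above; this is my own check, not part of ESXi-Customizer):

```python
# Sanity-check a downloaded VIB before injecting it with ESXi-Customizer.
# VIB packages are ar-format archives, so a bad download (e.g. an HTML
# error page saved as .vib) will fail the magic-byte check below.
import os

AR_MAGIC = b"!<arch>\n"  # header magic of the ar archive format

def looks_like_vib(path):
    """Return True if the file starts with the ar archive magic bytes."""
    if os.path.getsize(path) < len(AR_MAGIC):
        return False
    with open(path, "rb") as f:
        return f.read(len(AR_MAGIC)) == AR_MAGIC

if __name__ == "__main__" and os.path.exists("net-e1000e-2.1.4.x86_64.vib"):
    print(looks_like_vib("net-e1000e-2.1.4.x86_64.vib"))
```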

3. The image-building process is straightforward: point the tool at the files above and click Run.

4. After the customized image file is ready, convert it into a bootable USB image with UNetbootin; run the tool as administrator.


* The USB drive should be erased first to ensure no other boot files exist.

5. Use the USB drive to boot the workstation.

6. Press Enter to continue.

7. Press F11 to accept the EULA and continue 

8. For an upgrade, press F1 to view the disk details and identify which disk holds the existing system.

9. Press F11 for Upgrade

10. Select "Upgrade ESXi, preserve VMFS datastore".

11. Wait for the upgrade to complete, then restart the server.

12. The BIOS information is included for reference.

Thursday, December 12, 2013

SQL 2012SP1 installation error Error code 0x84B20001

I hit an error when installing MS SQL 2012 SP1 (source downloaded from here).
The error message is:
The patch file cannot be opened. The file is: D:\PCUSOURCE\x64\setup\sql_engine_core_inst_msi\sql_engine_core_inst.msp.
Error code 0x84B20001

* where D: is your CD-ROM drive letter.
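Since the root cause turned out to be a corrupted download, a quick way to catch this before a lengthy install is to hash the ISO and compare it against the checksum published on the download page. A minimal sketch (the expected hash is whatever the download page lists; nothing here is SQL-Server-specific):

```python
# Verify a downloaded ISO against its published checksum before installing.
# Compare the result against the SHA-1 value shown on the download page;
# a mismatch means the file is corrupted and should be re-downloaded.
import hashlib

def file_sha1(path, chunk_size=1024 * 1024):
    """Compute the SHA-1 of a file, reading it in chunks to limit memory use."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_iso(path, expected_sha1):
    """True if the file's SHA-1 matches the published checksum."""
    return file_sha1(path) == expected_sha1.lower()
```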

Googling the error code turned up this post on MSDN:

http://social.msdn.microsoft.com/Forums/silverlight/en-US/18806293-ac9a-472e-aaba-5d862ad4b857/sql-server-enterprise-core-2012-sp1-setup-failure

I tried extracting the whole ISO to the server's local drive, but it still failed. Re-downloading the ISO file solved the issue. The TechNet download site seems less stable than before!! ب_ب 

ESX and vCenter logs

These two VMware KB articles describe the log file locations for both ESXi 5.x and vCenter 5.x:

Wednesday, December 11, 2013

vSphere 5.5 Networking



Some useful points to remind myself when deploying vSphere 5.5 and configuring networking:

Networking Best Practices

Isolate from one another the networks for host management, vSphere vMotion, vSphere FT, and so on,
to improve security and performance.
Assign a group of virtual machines to a separate physical NIC. This separation allows for a portion of
the total networking workload to be shared evenly across multiple CPUs. The isolated virtual machines
can then better handle application traffic, for example, from a Web client.

To physically separate network services and to dedicate a particular set of NICs to a specific network
service, create a vSphere Standard Switch or vSphere Distributed Switch for each service. If this is not
possible, separate network services on a single switch by attaching them to port groups with different
VLAN IDs. In either case, verify with your network administrator that the networks or VLANs you
choose are isolated from the rest of your environment and that no routers connect them.

Keep the vSphere vMotion connection on a separate network. When migration with vMotion occurs,
the contents of the guest operating system’s memory is transmitted over the network. You can do this
either by using VLANs to segment a single physical network or by using separate physical networks
(the latter is preferable).

When using passthrough devices with a Linux kernel version 2.6.20 or earlier, avoid MSI and MSI-X
modes because these modes have significant performance impact.

You can add and remove network adapters from a standard or distributed switch without affecting the
virtual machines or the network service that is running behind that switch. If you remove all the
running hardware, the virtual machines can still communicate among themselves. If you leave one
network adapter intact, all the virtual machines can still connect with the physical network.

To protect your most sensitive virtual machines, deploy firewalls in virtual machines that route
between virtual networks with uplinks to physical networks and pure virtual networks with no
uplinks.

For best performance, use VMXNET 3 virtual machine NICs.



  • E1000: An emulated version of the Intel 82545EM Gigabit Ethernet NIC. A driver for this NIC is not included with all guest operating systems. Typically Linux versions 2.4.19 and later, Windows XP Professional x64 Edition and later, and Windows Server 2003 (32-bit) and later include the E1000 driver.

    Note: E1000 does not support jumbo frames prior to ESXi/ESX 4.1.
  • E1000e: This feature emulates a newer model of Intel Gigabit NIC (number 82574) in the virtual hardware. This is known as the "e1000e" vNIC. e1000e is available only on hardware version 8 (and newer) virtual machines in vSphere 5. It is the default vNIC for Windows 8 and newer (Windows) guest operating systems. For Linux guests, e1000e is not available from the UI (e1000, flexible vmxnet, enhanced vmxnet, and vmxnet3 are available for Linux).
  • VMXNET 3: The VMXNET 3 adapter is the next generation of a paravirtualized NIC designed for performance, and is not related to VMXNET or VMXNET 2. It offers all the features available in VMXNET 2, and adds several new features like multiqueue support (also known as Receive Side Scaling in Windows), IPv6 offloads, and MSI/MSI-X interrupt delivery. For information about the performance of VMXNET 3, see Performance Evaluation of VMXNET3 Virtual Network Device. Because operating system vendors do not provide built-in drivers for this card, you must install VMware Tools to have a driver for the VMXNET 3 network adapter available.

    VMXNET 3 is supported only for virtual machines version 7 and later, with a limited set of guest operating systems:

    • 32- and 64-bit versions of Microsoft Windows 7, 8, XP, 2003, 2003 R2, 2008, 2008 R2, Server 2012 and Server 2012 R2
    • 32- and 64-bit versions of Red Hat Enterprise Linux 5.0 and later
    • 32- and 64-bit versions of SUSE Linux Enterprise Server 10 and later
    • 32- and 64-bit versions of Asianux 3 and later
    • 32- and 64-bit versions of Debian 4
    • 32- and 64-bit versions of Debian 5
    • 32- and 64-bit versions of Debian 6
    • 32- and 64-bit versions of Ubuntu 7.04 and later
    • 32- and 64-bit versions of Sun Solaris 10 and later

    Notes:
    • In ESXi/ESX 4.1 and earlier releases, jumbo frames are not supported in the Solaris Guest OS for VMXNET 2 and VMXNET 3. The feature is supported starting with ESXi 5.0 for VMXNET 3 only. For more information, see Enabling Jumbo Frames on the Solaris guest operating system (2012445).
    • Fault Tolerance is not supported on a virtual machine configured with a VMXNET 3 vNIC in vSphere 4.0, but is fully supported on vSphere 4.1.
    • Windows Server 2012 is supported with e1000, e1000e, and VMXNET 3 on ESXi 5.0 Update 1 or higher.
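As a quick summary of the constraints quoted above, here is a rough Python sketch (my own, not an official VMware decision table) for picking a vNIC type from the hardware version, guest OS family, and whether VMware Tools is installed:

```python
# Rough vNIC chooser based on the constraints above (a sketch, not an
# official VMware rule set): VMXNET 3 needs hardware version 7+ and gets
# its driver from VMware Tools; e1000e needs hardware version 8+ and is
# not offered for Linux guests; E1000 is the broadly compatible fallback.
def pick_vnic(hw_version, guest_os, tools_installed):
    if hw_version >= 7 and tools_installed:
        return "vmxnet3"   # best performance, driver shipped with Tools
    if hw_version >= 8 and guest_os == "windows":
        return "e1000e"    # emulated Intel 82574, default for Windows 8+
    return "e1000"         # emulated Intel 82545EM, widest driver coverage
```

Guest OS support for VMXNET 3 is itself limited (see the list above), so treat this as a starting point, not a guarantee.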


Physical network adapters connected to the same vSphere Standard Switch or vSphere Distributed
Switch should also be connected to the same physical network.

Configure all VMkernel network adapters in a vSphere Distributed Switch with the same MTU. When
several VMkernel network adapters, configured with different MTUs, are connected to vSphere
distributed switches, you might experience network connectivity problems.
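The MTU-consistency rule above is easy to script against your own inventory. The sketch below assumes you have already built a vmknic-to-MTU mapping (for example, by parsing `esxcli network ip interface list` yourself; the input format here is my assumption, not esxcli output):

```python
# Check that all VMkernel NICs on a distributed switch share the same MTU,
# per the best practice above. Input is a simple name -> MTU mapping built
# from your own inventory tooling (assumed format, not esxcli's output).
def check_mtu_consistency(vmknic_mtus):
    """Return (ok, mismatches); mismatches maps outlier vmknics to their MTU."""
    if not vmknic_mtus:
        return True, {}
    # Treat the most common MTU as the intended one.
    values = list(vmknic_mtus.values())
    expected = max(set(values), key=values.count)
    mismatches = {k: v for k, v in vmknic_mtus.items() if v != expected}
    return not mismatches, mismatches
```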

When creating a distributed port group, do not use dynamic port binding. Dynamic port binding has
been deprecated since ESXi 5.0.

SR-IOV

Availability of Features

The following features are not available for virtual machines configured with SR-IOV:
  • vSphere vMotion
  • Storage vMotion
  • vShield
  • NetFlow
  • VXLAN Virtual Wire
  • vSphere High Availability
  • vSphere Fault Tolerance
  • vSphere DRS
  • vSphere DPM
  • Virtual machine suspend and resume
  • Virtual machine snapshots
  • MAC-based VLAN for passthrough virtual functions
  • Hot addition and removal of virtual devices, memory, and vCPU
  • Participation in a cluster environment
  • Network statistics for a virtual machine NIC using SR-IOV passthrough




Ref:

http://pubs.vmware.com/vsphere-55/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-55-networking-guide.pdf

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1001805

Tuesday, December 3, 2013

VMware Paravirtual SCSI (PVSCSI) adapters and I/O Performance note

In the VMware document:

Paravirtual SCSI (PVSCSI) controllers are high performance storage controllers that can result in greater throughput and lower CPU use. PVSCSI controllers are best suited for high-performance storage environments.

The official test results state:
"The test results show that PVSCSI is better than LSI Logic, except under one condition--the virtual machine is performing less than 2,000 IOPS and issuing greater than 4 outstanding I/Os. This issue is fixed in vSphere 4.1, so that the PVSCSI virtual adapter can be used with good performance, even under this condition.
The CPU utilization difference between LSI and PVSCSI at hundreds of IOPS is insignificant. But at larger numbers of IOPS, PVSCSI can save a lot of CPU cycles."

On Aug 26, 2011, Chethan Kumar published Achieving a Million I/O Operations per Second from a Single VMware vSphere 5.0 Host.


Results obtained from performance testing done at EMC lab show that:
  • A single vSphere 5 host is capable of supporting a million+ I/O operations per second.
  • 300,000 I/O operations per second can be achieved from a single virtual machine.
  • I/O throughput (bandwidth consumption) scales almost linearly as the request size of an I/O operation increases.
  • I/O operations on vSphere 5 systems with Paravirtual SCSI (PVSCSI) controllers use less CPU cycles than those with LSI Logic SAS virtual SCSI controllers.
The details on how to configure disks to use VMware Paravirtual SCSI (PVSCSI) adapters are here.

 Limitations are:
  • Hot add or remove requires a bus rescan from within the guest operating system.
  • Disks on PVSCSI controllers might not experience performance gains if they have snapshots or if memory on the ESXi host is over committed.
  • If you upgrade your Linux virtual machine to an unsupported kernel, you might not be able to access data on the disks attached to a PVSCSI controller. You can run vmware-config-tools.pl with the kernel-version parameter to regain access.
  • MSCS clusters are not supported.
  • PVSCSI controllers do not support boot disks, the disk that contains the system software, on Red Hat Linux 5 virtual machines. Attach the boot disk to the virtual machine by using any of the other supported controller types. 
A KB notes that large-scale workloads with intensive I/O patterns require adapter queue depths greater than the Paravirtual SCSI (PVSCSI) defaults. The current PVSCSI default queue depths are 64 (per device) and 254 (per adapter). You can increase them to 256 (per device) and 1024 (per adapter) inside a Windows virtual machine; details here.
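The KB makes the change through the pvscsi driver's parameters in the Windows guest registry; the exact key and syntax should be taken from the KB itself. As a small sketch, the limits quoted above can at least be bounds-checked before you apply anything:

```python
# Bounds-check requested PVSCSI queue depths against the limits quoted
# above (defaults 64/254, maximums 256/1024 for device/adapter). The
# actual change is made via the pvscsi driver's registry parameters in
# the Windows guest -- follow the VMware KB for the key and syntax.
PVSCSI_LIMITS = {
    "device":  {"default": 64,  "max": 256},
    "adapter": {"default": 254, "max": 1024},
}

def validate_queue_depth(kind, requested):
    """Return the requested depth, raising if it exceeds the documented max."""
    limits = PVSCSI_LIMITS[kind]
    if requested > limits["max"]:
        raise ValueError(
            f"{kind} queue depth {requested} exceeds max {limits['max']}")
    return requested
```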

Support matrix for using PVSCSI with data disks and boot disks:


Guest operating system | Data Disk | Boot Disk
--- | --- | ---
Windows Server 2012 (64-bit only) | ESXi 5.0 Update 1, ESXi 5.1 | ESXi 5.0 Update 1, ESXi 5.1
Windows Server 2008 R2 (64-bit only) | ESX/ESXi 4.0 Update 1, ESX/ESXi 4.1, ESXi 5.x | ESX/ESXi 4.0 Update 1, ESX/ESXi 4.1, ESXi 5.x
Windows Server 2008 (32- and 64-bit) | ESX/ESXi 4.x, ESXi 5.x | ESX/ESXi 4.0 Update 1, ESX/ESXi 4.1, ESXi 5.x
Windows Server 2003 (32- and 64-bit) | ESX/ESXi 4.x, ESXi 5.x | ESX/ESXi 4.x, ESXi 5.x
Windows 7 (32- and 64-bit) | ESX/ESXi 4.1, ESXi 5.x | ESX/ESXi 4.1, ESXi 5.x
Windows Vista (32- and 64-bit) | ESX/ESXi 4.1, ESXi 5.x | ESX/ESXi 4.1, ESXi 5.x
Windows XP (32- and 64-bit) | ESX/ESXi 4.1, ESXi 5.x | ESX/ESXi 4.1, ESXi 5.x
Red Hat Enterprise Linux (RHEL) 5 (32- and 64-bit) and all update releases | ESX/ESXi 4.x, ESXi 5.x | Not Supported
RHEL 6 (32- and 64-bit) | ESX/ESXi 4.0 Update 2, ESX/ESXi 4.1, ESXi 5.x | ESX/ESXi 4.0 Update 2, ESX/ESXi 4.1, ESXi 5.x
SUSE Linux Enterprise 11 SP1 (32- and 64-bit) and later releases | ESX/ESXi 4.0 Update 2, ESX/ESXi 4.1, ESXi 5.x | ESX/ESXi 4.0 Update 2, ESX/ESXi 4.1, ESXi 5.x
Ubuntu 10.04 (32- and 64-bit) and later releases | ESX/ESXi 4.0 Update 2, ESX/ESXi 4.1, ESXi 5.x | ESX/ESXi 4.0 Update 2, ESX/ESXi 4.1, ESXi 5.x
Distros using Linux version 2.6.33 or later that include the vmw_pvscsi driver | ESX/ESXi 4.1, ESXi 5.x | ESX/ESXi 4.1, ESXi 5.x

ref:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1010398 


When To Use VMware PVSCSI (And When Not To)

http://virtualizationreview.com/Blogs/Virtual-Insider/2011/03/When-To-Use-VMware-PVSCSI.aspx