Monday, May 26, 2014

KB Datastore renaming fails when the Virtual Infrastructure Client is connected directly to ESX host (1004845)

Datastore renaming fails when the Virtual Infrastructure Client is connected directly to ESX host (1004845)


Details

When the Virtual Infrastructure (VI) Client is connected directly to an ESX host, renaming a datastore fails.

Solution

To rename the datastore, connect to VirtualCenter instead of connecting directly to the host:
  1. Log in to the VirtualCenter Server using the Virtual Infrastructure Client.
  2. Select the host, click the Configuration tab, and click Storage.
  3. Right-click the datastore and select Rename, or click the datastore name directly.
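
As far as I know, a rename can also be attempted from the host's console with vim-cmd. This is only an unverified sketch; the datastore names below are just examples:

    # vim-cmd hostsvc/datastore/rename oldDatastoreName newDatastoreName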


Tuesday, May 20, 2014

VMware KB Testing VMkernel network connectivity with the vmkping command (1003728)

Testing VMkernel network connectivity with the vmkping command (1003728)

Purpose

For troubleshooting purposes, it may be necessary to test VMkernel network connectivity between ESX hosts in your environment.

This article provides you with the steps to perform a vmkping test between your ESX hosts.

Resolution

The vmkping command sources a ping from the local VMkernel port.
 
To initiate a vmkping test from the console of an ESX Server host:
  1. Connect to the ESX/ESXi host using an SSH session. For more information, see Tech Support Mode for Emergency Support (1003677) and Using Tech Support Mode in ESXi 4.1 and 5.0 (1017910).
  2. In the command shell, run the command:

    # vmkping x.x.x.x
    where x.x.x.x is the hostname or IP address of the server that you want to ping.
  3. If you have Jumbo Frames configured in your environment, run the vmkping command with the -s and -d options.

    # vmkping -d -s 8972 x.x.x.x
    Note: If you have more than one VMkernel port on the same network (such as a heartbeat VMkernel port for iSCSI), all VMkernel ports on that network must also be configured with Jumbo Frames (MTU 9000). If any VMkernel port on the same network has a lower MTU, the vmkping command fails with the -s 8972 option. The -d option sets the DF (Don't Fragment) bit on the IPv4 packet; the 8972-byte payload is the largest that fits in a 9000-byte MTU once the 20-byte IP header and 8-byte ICMP header are added.

  4. In ESXi 5.1, you can specify which vmkernel port to use for outgoing ICMP traffic with the -I option:

    # vmkping -I vmkX x.x.x.x

    Notes: 
    • ICMP response behavior has changed in ESXi 5.1. For more information, see Change to ICMP ping response behavior in ESXi 5.1 (2042189).
    • In releases prior to ESXi 5.1, the host automatically selects the VMkernel port based on its VMkernel routing/forwarding table. To display the routing table, use the esxcfg-route -l command (see the example output after the esxcfg-vmknic output below).
    • You can verify the MTU size from an SSH session with the esxcfg-nics -l and esxcfg-vmknic -l commands.

Output should be similar to:

# esxcfg-nics -l
Name    PCI           Driver      Link Speed     Duplex MAC Address       MTU    Description
vmnic0  0000:02:00.00 e1000       Up   1000Mbps  Full   00:50:56:17:0a:60 9000   Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)
vmnic1  0000:02:01.00 e1000       Up   1000Mbps  Full   00:50:56:17:0a:65 9000   Intel Corporation 82545EM Gigabit Ethernet Controller (Copper)

esxcfg-vmknic -l 

Output should be similar to:

# esxcfg-vmknic -l
Interface  Port Group/DVPort  IP Family  IP Address   Netmask        Broadcast     MAC Address        MTU   TSO MSS  Enabled  Type
vmk1       iSCSI              IPv4       10.10.10.10  255.255.255.0  10.10.10.255  00:50:56:XX:XX:64  9000  65535    true     STATIC
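
For reference, the VMkernel routing table mentioned in the notes above is displayed with esxcfg-route -l. On ESXi 5.x the output looks roughly like this (the networks and addresses here are made up for illustration):

# esxcfg-route -l
VMkernel Routes:
Network          Netmask          Gateway          Interface
10.10.10.0       255.255.255.0    Local Subnet     vmk1
default          0.0.0.0          10.10.10.254     vmk0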

A successful ping response is similar to:
 
# vmkping 10.0.0.1
PING server(10.0.0.1): 56 data bytes
64 bytes from 10.0.0.1: icmp_seq=0 ttl=64 time=10.245 ms
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.935 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.926 ms
--- server ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.926/4.035/10.245 ms
 
An unsuccessful ping response is similar to:
 
# vmkping 10.0.0.2
PING server (10.0.0.2) 56(84) bytes of data.
--- server ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 3017ms

 
Notes:
  • If you see intermittent ping success, this might indicate you have incompatible NICs teamed on the VMotion port. Either team compatible NICs or set one of the NICs to standby.
  • If you do not get a response when pinging the server by hostname, ping its IP address instead; this helps you determine whether the problem is caused by hostname resolution. If you are testing connectivity to a VMkernel port on another server, remember to use the VMkernel port's IP address, because the server's hostname usually resolves to the service console address on the remote server. (A quick name-resolution check is sketched below.)
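
As a quick name-resolution check from the host itself (nslookup is available in the ESXi busybox shell as far as I know; the hostname and address below are made up):

# nslookup esx02.example.local
# vmkping 10.0.0.1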


VMware Cluster datastore path dead

Symptoms:

  1. There are 4 ESXi 5.1 hosts (esxi01-04) in a VMware cluster; VMs on two specific datastores cannot start up on host esxi01.
  2. Checked the iSCSI network adapter, which should be fine, since only the two datastores have the issue.
  3. Checked that the LUN mapping is fine.
  4. Rescanning and refreshing the datastores did not solve the issue.
  5. Disconnecting and re-mounting the datastores on esxi01 did not solve the problem.
  6. To resume service as soon as possible, migrated all VMs to a TEMP datastore, then unmounted and detached the affected datastores on all ESXi hosts (see the esxcli sketch after this list).
  7. Unmapped the LUNs on the storage side and rescanned on the VMware cluster; the problem LUNs should then disappear.
  8. Remapped the LUNs to the ESXi hosts, added them back to the datastore cluster, resignatured and formatted the LUNs, and put a VM on them for testing (the resignature commands are sketched after the KB links below).
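
The unmount/detach part can be done with esxcli on each ESXi 5.x host, roughly as follows (see KB 2004605 linked below for the full procedure; the datastore label and NAA ID are placeholders):

List the mounted VMFS volumes and note the naa.* device backing the affected datastore:
    # esxcli storage filesystem list
Unmount the VMFS volume (repeat on every host):
    # esxcli storage filesystem unmount -l DatastoreName
Detach the backing device (repeat on every host):
    # esxcli storage core device set --state=off -d naa.xxxxxxxxxxxxxxxx
After the LUN has been unmapped on the storage side, rescan:
    # esxcli storage core adapter rescan --all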

Searching through the ESXi logs gives the results below:

(standard input)-2014-05-xxT03:09:31.297Z cpu6:8198)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237:NMP device "naa.60a9800037536d72502444xxxxxxxxb" state in doubt; requested fast path state update...
(standard input)-2014-05-xxT03:09:31.594Z cpu4:8401)ALERT: NMP: vmk_NmpVerifyPathUID:1167:The physical media represented by device naa.60a9800037536d72502444xxxxxx (path vmhba32:C0:T0:L13) has changed. If this is a data LUN, this is a critical error. Detect
(standard input)-2014-05-xxT03:09:31.594Z cpu4:8401)WARNING: ScsiDevice: 1422: Device :naa.60a9800037536d72502444xxxxxxx has been removed or is permanently inaccessible.
(standard input):2014-05-xxT03:09:32.410Z cpu4:8196)WARNING: HBX: 1548: HB failed due to no connectivity on [HB state abcdef02 offset 4059136 gen 17 stampUS 1847607014728 uuid 535950b3-29e2ce5d-488a-0025b501011f jrnl <FB 70200> drv 14.58] on vol 'XXXXXXX'
(standard input)-2014-05-xxT03:10:50.461Z cpu10:8202)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237:NMP device "naa.60a9800037536d72502xxxxxxxx" state in doubt; requested fast path state update...
(standard input)-2014-05-xxT03:16:42.380Z cpu15:10515)WARNING: Vol3: 1717: Failed to refresh FS 535daf66-4b90cce6-fa7c-0025b50100ee descriptor: Device is permanently unavailable
(standard input)-2014-05-xxT03:16:42.661Z cpu15:10515)WARNING: Vol3: 1717: Failed to refresh FS 535daf66-4b90cce6-fa7c-0025b50100ee descriptor: Device is permanently unavailable

I am not sure whether this KB describes the exact issue, but at least it has similar symptoms:

VMFS Resignature causes thrashing between multiple VMware ESXi 4.x/5.x and ESX 4.x hosts (1026710)

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1026710
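
For the resignature mentioned in step 8, the ESXi 5.x esxcli equivalent is roughly this (the datastore label is a placeholder; resignaturing can also be done from the Add Storage wizard):

    # esxcli storage vmfs snapshot list
    # esxcli storage vmfs snapshot resignature -l DatastoreName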

Unmounting a LUN or detaching a datastore/storage device from multiple VMware ESXi 5.x hosts (2004605)
This article provides steps to unmount a LUN from an ESXi 5.x host, which includes unmounting the file system and detaching the device. These steps must be performed for each ESXi host.
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2004605

Thursday, May 15, 2014

networking device SIM note

To build a lab consisting of a number of networks and cloud-related products, I am looking for router software that can connect different networks the way a real router does. Some reference links are below:

Cisco CSR1000v For Home Labs
http://www.fryguy.net/2013/12/27/cisco-csr1000v-for-home-labs/

Cisco CSR1000V vs the Fabled IOU
http://lamejournal.com/2013/12/28/cisco-csr1000v-vs-fabled-iou/

How to upgrade Titanium VMware image to 6.1.1

http://51sec.blogspot.hk/2013/03/how-to-upgrade-titanium-vmware-image-to.html