Friday, November 11, 2016

Nexus FEX Config


How to Set Up Cisco Nexus Fabric Extender

Cisco Nexus Fabric Extenders (FEX) provide top-of-rack (ToR) server connectivity managed from a parent Nexus 5000 or 7000 series switch. This 9-step plan shows you how to bring a FEX online and includes configuration tips and code examples.
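As a minimal sketch of the core FEX association (assuming a Nexus 5500 parent, FEX ID 100, and fabric uplinks on Ethernet 1/1-2; a Nexus 7000 parent uses install feature-set fex and feature-set fex instead of feature fex):

feature fex
fex 100
  description "Rack10-FEX"
interface port-channel 100
  switchport mode fex-fabric
  fex associate 100
interface ethernet 1/1-2
  switchport mode fex-fabric
  fex associate 100
  channel-group 100

Once the FEX comes online (check with show fex), its host-facing ports appear on the parent switch as Ethernet100/1/x interfaces.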

Friday, October 31, 2014

Symantec Backup Exec 2014 /BE2014 tech note and licensing




BE2014 licenses look like the examples below:

SYMC BACKUP EXEC 2012 SERVER WIN PER SERVER BNDL STD LIC EXPRESS BAND S ESSENTIAL 12 MONTHS

SYMC BACKUP EXEC 2012 AGENT FOR WINDOWS WIN PER SERVER BNDL STD LIC EXPRESS BAND S ESSENTIAL 12 MONTHS

SYMC BACKUP EXEC 2012 AGENT FOR VMWARE AND HYPER-V WIN PER HOST SERVER BNDL STD LIC EXPRESS BAND S ESSENTIAL 12 MONTHS

SYMC BACKUP EXEC 2012 AGENT FOR APPLICATIONS AND DATABASES WIN PER SERVER BNDL STD LIC EXPRESS BAND S ESSENTIAL 12 MONTHS

SYMC BACKUP EXEC 2012 OPTION DEDUPLICATION WIN PER SERVER BNDL STD LIC EXPRESS BAND S ESSENTIAL 12 MONTHS

Meaning:
BASIC = 8x5 support for 12 months
ESSENTIAL = 24x7 support for 12 months.

Then you get into the BUNDLE, INITIAL, and RENEWAL differences....

BUNDLE is what you typically see on new purchases of an agent.

RENEWAL: This is your renewal SKU for the following year, to renew maintenance/support and retain product upgrade eligibility.


The Symantec Backup Exec 2014™ Licensing Guide also gives details on the agent options and the different licensing modes:

http://www.symantec.com/content/en/us/enterprise/other_resources/b-backup-exec-2014-licensing-guide-or-21329872.pdf

Friday, October 10, 2014

NetApp MultiStore note

NetApp MultiStore

APPLICATION AND VOLUME LAYOUT
NetApp storage controllers have the ability to further logically partition the available storage into containers called flexible volumes or FlexVol volumes. These FlexVol volumes are carved out of the available aggregates. For isolation and security purposes, these FlexVol volumes can be allocated to virtual storage controllers called vFiler® units. These vFiler units, available by licensing MultiStore®, allow specific datasets to be housed within their own IP spaces. The applications provisioned in this environment are provisioned into vFiler units. Figure 14 details the organization of the deployed applications and their respective volumes.

Figure 14) Base FlexPod unit: Application and volume layout.
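As an illustrative sketch of that layout in 7-Mode commands (the aggregate aggr1, volume app01_data, and vFiler unit vfiler1 are hypothetical names, and MultiStore must be licensed):

vol create app01_data aggr1 500g
vfiler add vfiler1 /vol/app01_data

The FlexVol volume is carved out of the aggregate and then assigned to the vFiler unit, so the application dataset lives inside that vFiler unit's own IPspace.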



MultiStore and vFiler basics

What is a vFiler unit?

vFiler: a lightweight instance of the Data ONTAP multiprotocol server; all system resources are shared between vFiler units.
Storage units in a vFiler unit are FlexVol volumes and qtrees.
Network units are IP addresses, VLANs, VIFs, aliases, and IPspaces.
vFiler units are not hypervisors – resources of one vFiler unit cannot be accessed or discovered by any other vFiler unit.

MultiStore configuration:

Maximum number of vFiler units that can be created = 64, plus vfiler0
vFiler configuration is stored in a separate volume/qtree
Additional storage and network resources can be moved, added, or deleted
NFS, CIFS, iSCSI, HTTP, NDMP, FTP, FTPS, SSH, and SFTP protocols are supported
Protocols can be enabled/disabled per vFiler unit (see the command sketch after this list)
Destroying a vFiler unit does not destroy data


A best practice is to use FlexVol volumes, not qtrees, as the base resource
Destroying a vFiler unit does not destroy the data – volume/qtree resources are moved back to vfiler0
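
A minimal sketch of per-vFiler protocol control (7-Mode commands; the vFiler name vfiler1 is hypothetical):

vfiler allow vfiler1 proto=nfs proto=iscsi
vfiler disallow vfiler1 proto=cifs proto=ftp
vfiler status -a vfiler1

vfiler status -a shows the allowed/disallowed protocols along with the storage and network resources owned by the unit.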

Secure multi-tenancy capability with NetApp, Cisco, and VMware.
Key Points:
        Cisco, NetApp, and VMware have built the industry’s first end-to-end secure multi-tenancy solution.
        Multi-tenancy, which securely separates different applications and data sets on the same infrastructure, is particularly important for HIPAA and other applications that are subject to strict compliance and security regulations.
        A shared infrastructure requires strict isolation between the different tenants that are resident within the infrastructure. The tenants can be different clients, business units, departments or security zones. Previously, customers with a shared cloud infrastructure were able to achieve “pockets” of isolation within the virtual server layer, the network layer, and storage, but never completely end-to-end. Without end-to-end isolation, customers had to spend both money and additional resources to address the issue of isolation and compliance (as it is mandated by some governments), creating inefficiencies across the data center.
        The pre-tested and validated Secure Multi-Tenancy Design Architecture is for customers who have deployed the Cisco Unified Computing System; Cisco Nexus 7000, 5000, and 1000V Series Switches; NetApp FAS storage with MultiStore software, which creates logical partitions within a storage system; and VMware vSphere virtualization software with vShield, another tool that creates secure, logical partitions in virtual systems. The architecture documentation provides details about implementing and configuring the design, as well as best practices for building and managing these solutions.
        With this capability, IT can enable different functional departments or business applications to share server, networking, and storage infrastructure in a secure fashion. The same is true for service providers who can now provide secure server, network, and storage partitions across shared hardware. Shared hardware means greater utilization and efficiency along with equipment, operations, and utilities cost savings.

How to Create a Virtual Filer (vFiler, Data ONTAP)


What to consider for a vFiler unit's participation in an IPspace
There are some guidelines to remember when assigning an IPspace to a vFiler unit (a command sketch follows the list).
  • An IPspace can contain multiple vFiler units; however, a vFiler unit can belong to only one IPspace.
  • Each vFiler unit in an IPspace must have an IP address that is unique within that IPspace, but a vFiler unit in one IPspace can have the same IP address as a vFiler unit in a different IPspace.
  • Ensure that you assign an IPspace correctly because once you assign an IPspace to a vFiler unit, you cannot change the assignment without destroying the vFiler unit.
  • Each vFiler unit must have one IP address on the interface that leads to the default gateway of the assigned IPspace. This requirement ensures that the vFiler unit is reachable from within the IPspace.
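
A minimal command sketch for creating a vFiler unit in its own IPspace (7-Mode syntax; the IPspace name ips-tenant1, interface e0b, IP address, and root volume /vol/vfiler1_root are hypothetical, and the root volume must already exist):

ipspace create ips-tenant1
ipspace assign ips-tenant1 e0b
vfiler create vfiler1 -s ips-tenant1 -i 192.168.10.10 /vol/vfiler1_root
vfiler status -a vfiler1

The create command walks through an interactive setup that binds the IP address to an interface in the assigned IPspace, in line with the last guideline above.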



Monday, October 6, 2014

UCS C220 M3 VIC 1225 and Vsphere note







Cisco Unified Computing System Overview
The Cisco Unified Computing System (Cisco UCS®) is a next-generation data center platform that unites compute, networking, storage access, and virtualization resources in a cohesive system designed to reduce total cost of ownership (TCO) and increase business agility. The system integrates a low-latency, lossless 10 Gigabit Ethernet unified network fabric with enterprise-class blade and rack x86-architecture servers. The system is an integrated, scalable, multichassis platform in which all resources participate in a unified management domain.
Product Overview
A Cisco® innovation, the Cisco UCS Virtual Interface Card (VIC) 1225 (Figure 1) is a dual-port Enhanced Small Form-Factor Pluggable (SFP+) 10 Gigabit Ethernet and Fibre Channel over Ethernet (FCoE)-capable PCI Express (PCIe) card designed exclusively for Cisco UCS C-Series Rack Servers. With its half-height design, the card preserves full-height slots in servers for third-party adapters certified by Cisco. It incorporates next-generation converged network adapter (CNA) technology from Cisco, providing investment protection for future feature releases. The card enables a policy-based, stateless, agile server infrastructure that can present up to 256 PCIe standards-compliant interfaces to the host, which can be dynamically configured as either network interface cards (NICs) or host bus adapters (HBAs). In addition, the Cisco UCS VIC 1225 supports Cisco Data Center Virtual Machine Fabric Extender (VM-FEX) technology, which extends the Cisco UCS fabric interconnect ports to virtual machines, simplifying server virtualization deployment.
Figure 1. Cisco UCS VIC 1225
Features and Benefits
Stateless and agile: The personality of the card is determined dynamically at boot time using the service profile associated with the server. The number, type (NIC or HBA), identity (MAC address and World Wide Name [WWN]), failover policy, bandwidth, and quality-of-service (QoS) policies of the PCIe interfaces are all determined using the service profile. The capability to define, create, and use interfaces on demand provides a stateless and agile server infrastructure (Figure 2).
Figure 2. Virtual Device Support on the Cisco UCS VIC 1225
Network interface virtualization: Each PCIe interface created on the VIC is associated with an interface on the Cisco UCS fabric interconnect, providing complete network separation for each virtual cable between a PCIe device on the VIC and the interface on the fabric interconnect (Figure 3).
Figure 3. Cisco UCS VIC 1225 Architecture
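Since this note pairs the VIC 1225 with vSphere, a quick generic check (not from the data sheet) of the PCIe interfaces the card presents to an ESXi host:

esxcli network nic list
esxcli storage core adapter list

The vNICs defined on the VIC show up as vmnicX devices and the vHBAs as vmhbaX adapters, matching whatever NIC/HBA mix was defined for the card.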
Cisco SingleConnect Technology
Cisco® SingleConnect technology provides an exceptionally easy, intelligent, and efficient way to connect and manage computing in the data center. Cisco SingleConnect technology is an exclusive Cisco innovation that dramatically simplifies the way that data centers connect to:
 Rack and blade servers
 Physical servers and virtual machines
 LAN, SAN, and management networks
The solution addresses the challenges of today’s data center, and the result is a simple, intelligent, and efficient fabric:
 Easy: Cisco SingleConnect technology provides a “wire once and walk away” solution that eliminates traditional manual, time-consuming, error-prone processes and instead makes connecting servers to the Cisco Unified Computing System (Cisco UCS®) fast and easy.
 Intelligent: the technology is intelligent because it uses a zero-touch model to allocate I/O connectivity (LAN, SAN, and management) across any type of server: physical rack and blade servers and virtual machines. The network intelligence helps Cisco UCS adapt to the needs of applications. Rather than limiting applications to specific servers, Cisco UCS makes it easy to run any workload on any server.
 Efficient: the technology is highly efficient because LAN, SAN, and management connections are shared over a single network, increasing utilization while reducing the number of moving parts compared to traditional approaches with multiple networks.
Cisco SingleConnect technology is implemented with an end-to-end system I/O architecture that uses Cisco Unified Fabric and Cisco Fabric Extender Technology (FEX Technology) to connect every Cisco UCS component with a single network and a single network layer. As customers expect from Cisco, the Cisco UCS I/O architecture is based on open standards and is reliable, available, and secure.
Cisco Data Center VM-FEX Technology: Cisco Data Center VM-FEX technology extends fabric interconnect ports directly to virtual machines, eliminating software-based switching in the hypervisor. Cisco Data Center VM-FEX technology collapses virtual and physical networking infrastructure into a single infrastructure that is fully aware of the virtual machines’ locations and networking policies (Figure 4). Cisco Data Center VM-FEX technology is implemented by Cisco VICs with a pre-standard implementation of IEEE 802.1BR Port Extender.
Figure 4. Cisco Data Center VM-FEX with Cisco UCS VIC 1225
Table 1 summarizes the main features and benefits of the Cisco UCS VIC 1225.
Table 1. Features and Benefits
x16 PCIe Gen 2 interface: delivers greater throughput.
2 x 10-Gbps unified I/O: delivers 20 Gbps to the server; helps reduce TCO by consolidating the overall number of NICs, HBAs, cables, and switches, because LAN and SAN traffic run over the same adapter card and fabric.
Up to 256 dynamic virtual adapters and interfaces: creates fully functional, unique, and independent PCIe adapters and interfaces (NICs or HBAs) without requiring single-root I/O virtualization (SR-IOV) support from OSs or hypervisors; allows these virtual interfaces and adapters to be configured and operated independently, just like physical interfaces and adapters; creates a highly flexible I/O environment needing only one card for all I/O configurations. (Note: Cisco UCS VIC 1225 hardware is SR-IOV capable, and you can enable SR-IOV once it is broadly supported by the popular operating systems. Refer to the UCS Manager configuration limits for your specific OS and environment in the configuration guide.)
Cisco SingleConnect technology: a single unified network – the same network brings LAN, SAN, and management connectivity to each server.
Cisco Data Center VM-FEX technology: unifies virtual and physical networking in a single infrastructure; provides virtual machine visibility from the physical network and a consistent network operating model for physical and virtual servers; enables configurations and policies to follow the virtual machine during virtual machine migration; provides a pre-standard implementation of the IEEE 802.1BR Port Extender standard.
Centralized management: enables the card to be centrally managed and configured by Cisco UCS Manager.
Network architecture: provides a redundant path to the fabric interconnect using hardware-based fabric failover.
More than 600,000 I/O operations per second (IOPS): provides high I/O performance for demanding applications.
Support for lossless Ethernet: uses Priority Flow Control (PFC) to enable FCoE as part of the Cisco unified fabric.
Broad OS and hypervisor support: supports customer requirements for Microsoft Windows, Red Hat Enterprise Linux, SUSE Linux, VMware vSphere, and Citrix XenServer.
Product Specifications
Table 2 lists the specifications for the Cisco UCS VIC 1225.
Table 2. Product Specifications
Standards: 10 Gigabit Ethernet; IEEE 802.3ae; IEEE 802.3x; IEEE 802.1Q VLAN; IEEE 802.1p; IEEE 802.1Qaz; IEEE 802.1Qbb; pre-standard IEEE 802.1BR; jumbo frames up to 9 KB; Fibre Channel Protocol (FCP); Small Computer System Interface (SCSI)-FCP; T11 FCoE
Components: Cisco UCS custom application-specific integrated circuit (ASIC)
Ports: 2 x 10-Gbps FCoE SFP+ ports
Connectivity: PCIe 2.0 x16 form factor
Performance: 10-Gbps line rate per port
Management: Cisco UCS Manager Release 2.0(2) and later
Number of interfaces: 256 virtual interfaces (approximately 8 are reserved for internal use; other factors such as the OS and hypervisor may limit this number further)
Physical dimensions: length = 6.6 in. (16.76 cm); width = 2.5 in. (6.35 cm)
Typical power: 12 watts (W)
System Requirements
The Cisco UCS VIC 1225 is designed for use only on Cisco UCS C-Series Rack Servers. Cisco UCS VIC 1225 is supported on Cisco UCS C260 M2, C460 M2, C220 M3, C240 M3, C22 M3, C24 M3, C220 M4, C240 M4, and C460 M4 rack servers. One or more Cisco UCS VIC 1225 cards are supported on these servers depending on the slot configuration. See the server configuration guide for details.

UCSC-PCIE-C10T-02 Cisco VIC 1225T Dual Port 10GBaseT CNA Half





KB: Troubleshooting a black screen when logging into a Horizon View virtual desktop using PCoIP (1028332)


Symptoms

  • When attempting to connect to a Horizon View virtual machine using the PCoIP protocol, a black screen is displayed temporarily and the client disconnects. Connecting to the same virtual machine using RDP protocol is successful.
  • Internal PCoIP connections may be successful; however, connecting externally results in a black screen.
  • Connections from a PCoIP Zero Client fail, but connecting using the Horizon View software client is successful.
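
Only the symptoms are quoted above; as a generic first check (an assumption on my part, not quoted from the KB), verify that PCoIP traffic on port 4172 can reach the desktop, for example from a Windows client (the desktop name is hypothetical):

Test-NetConnection -ComputerName view-desktop01.example.com -Port 4172

PCoIP uses TCP 4172 for session setup and UDP 4172 for the display stream, so a blocked 4172 path between the client (or Security Server) and the desktop commonly produces exactly this black-screen-then-disconnect behavior.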

Monday, September 29, 2014

note about VMotion

vMotion VM between vSphere clusters – vSphere 4.X and vSphere 5.X

Yes, you can migrate VMs between vSphere clusters (even between different versions) as long as the conditions below are met (a short PowerCLI sketch follows the list):
  • vSphere clusters must be managed by a single vCenter Server.
  • vSphere clusters must be within a single Datacenter object in vCenter Server.
  • Pre-vSphere 5.1 – clusters must have access to the same datastore.
  • The vMotion network is stretched between clusters.
  • Processors must be from the same vendor (Intel or AMD) and family (model) on both clusters, or both clusters have a common EVC baseline applied.
  • The virtual machine hardware version is supported by the hypervisor – very useful during migration to new hardware or a new version of the vSphere platform.
  • If you have a vDS implemented, make sure the dvPortGroup spans both clusters.
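
A minimal PowerCLI sketch of such a migration (the vCenter, VM, and host names are hypothetical; both clusters must sit under the same vCenter and Datacenter object, per the conditions above):

Connect-VIServer vcenter01.lab.local
Move-VM -VM "app-vm01" -Destination (Get-VMHost "esx10.lab.local")

Pointing -Destination at a host in the target cluster triggers a vMotion when the VM is powered on and the compatibility conditions above are satisfied.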



Long Distance VMotion

Requirements:
·         An IP network with a minimum bandwidth of 622 Mbps is required.
·         The maximum latency between the two VMware vSphere servers cannot exceed 5 milliseconds (ms).
·         The source and destination VMware ESX servers must have a private VMware VMotion network on the same IP subnet and broadcast domain.
·         The IP subnet on which the virtual machine resides must be accessible from both the source and destination VMware ESX servers. This requirement is very important because a virtual machine retains its IP address when it moves to the destination VMware ESX server to help ensure that its communication with the outside world (for example, with TCP clients) continues smoothly after the move.
·         The data storage location including the boot device used by the virtual machine must be active and accessible by both the source and destination VMware ESX servers at all times.
·         Access from VMware vCenter, the VMware Virtual Infrastructure (VI) management GUI, to both the VMware ESX servers must be available to accomplish the migration.

Can I migrate a running VM between datacenters using vMotion?


https://communities.vmware.com/message/2166733

Thursday, September 11, 2014

VMware NSX Manager 6.0.5 install


1. A "General error" appears when deploying the NSX Manager OVA through the vCenter 5.5 Web Client


2. Searched for the error and read some vCenter logs, but didn't find anything related

3. Deploying with the vSphere C# Client works:



Thursday, August 21, 2014

Software-defined storage (SDS) note: VSAN, Nutanix

Read the VMware vSphere Blog article VMware Virtual SAN with Cisco UCS Reference Architecture:
 http://blogs.vmware.com/vsphere/2014/08/vmware-virtual-san-cisco-unified-computing-system-reference-architecture.html
It briefly covers the integration of VSAN with UCS C240 M3 series servers; the whitepaper is informative whether or not you deploy VSAN on UCS C-Series.







Nutanix Virtual Computing Platform compared to VMware VSAN 1.0
  • This article mentions a lot of Nutanix features and compares them to VSAN.
  • Its commentary on the SAN concepts is interesting.

VSAN
VMware Education-VMware Virtual SAN Fundamentals [V5.5]
http://mylearn.vmware.com/mgrreg/courses.cfm?ui=www_edu&a=one&id_subject=55806

Technical Deep Dive – keynote notes:
SDS on the ESXi hypervisor, x86 servers
Embedded in the vSphere kernel
Built-in resiliency
Inexpensive disks and SSDs
POC checklist PDF

Nutanix NX-3050 series

This article is a complete review of the Nutanix NX-3050 series by Brian Shur:

http://www.datacenterzombie.com/nutanix-review-nx-3050-series/


  1. Architecture
  2. Ratings and reasons
  3. Cost
  4. Management console Web interface
  5. Upgrade note
  6. Performance of VMware View Planner 
  7. Nutanix DR



Wednesday, August 20, 2014

Cisco bug ID CSCul44421: Chassis SEEPROM local IO failure error on switch

Symptom:

Error accessing shared-storage fault.

Problem Details: affected="sys/mgmt-entity-A"
cause="device-shared-storage-IO-error"
changeSet=""
code="E4196535"
created="2013-12-18T08:50:03"
descr="device Fxxxxxxxxxxxx, error accessing shared-storage"
dn="event-log/785673"
id="785673"
ind="state-transition"


From Cisco support :

The error accessing shared-storage fault is not harmful and does not affect system functionality. In the UCS chassis design, a chip called SEEPROM is built into the backplane. SEEPROM is persistent memory used to store the cluster database version, so that the cluster database is not overwritten by an old version when a failover happens.

The communication between the IO module and the SEEPROM is one way. We store the identical SAM DB version in three chassis rather than in one (so-called three-chassis redundancy). Because the communication between the IO module and the SEEPROM happens only one way, the error accessing shared-storage fault can happen sometimes – this is system behavior per specification and design. So, as long as one SEEPROM is readable, the UCS works normally.

If the error accessing shared-storage fault is currently in cleared state and does not raise again, do not apply the workaround and do not do anything.

If the error accessing shared-storage fault is in the raised state and never clears, or the fault keeps coming back, try the following workarounds:

* Reboot the IO module.


* If the alert does not clear, reseat the IO module. Make sure it is firmly seated.

https://tools.cisco.com/bugsearch/bug/CSCul44421

Other workaround mentioned in Cisco support forum:

  • Reboot/Reseat IOM
  • Reboot Chassis

https://supportforums.cisco.com/discussion/11349691/ucs-error-accessing-shared-storage-f0863

Wednesday, July 30, 2014

Backing up ESXi host configuration data

Using the ESXi Command Line

To synchronize any configuration changes with persistent storage, run the command:

vim-cmd hostsvc/firmware/sync_config

To back up the configuration data for an ESXi host, run the command:

vim-cmd hostsvc/firmware/backup_config

Note: The command outputs a URL from which a web browser can download the file. The backup file is located in the /scratch/downloads directory as configBundle-<HostFQDN>.tgz.

The backup file should be moved to a local datastore (e.g. /vmfs/volumes/local1/ESXi_Backup/) if you are not going to download it through the URL.
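
As a small sketch of that from the ESXi shell (the datastore name local1 and the ESXi_Backup folder are just the example path above):

vim-cmd hostsvc/firmware/sync_config
vim-cmd hostsvc/firmware/backup_config
mkdir -p /vmfs/volumes/local1/ESXi_Backup
find /scratch/downloads -name 'configBundle-*.tgz' -exec cp {} /vmfs/volumes/local1/ESXi_Backup/ \;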



Using the ESXi Command Line:
Note: When restoring configuration data, the build number of the host must match the build number of the host that created the backup file.
  1. Put the host into maintenance mode by running the command:

    vim-cmd hostsvc/maintenance_mode_enter
  2. Copy the backup configuration file to a location accessible by the host and run the command:

    In this case, the configuration file was copied to the host's /tmp directory. For more information, see Using SCP to copy files to or from an ESX host (1918).
vim-cmd hostsvc/firmware/restore_config /tmp/configBundle.tgz

Note: Executing this command will initiate an automatic reboot of the host after command completion.


http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2042141

Monday, July 28, 2014

SW ,HW,OS Compatibility note

I am trying to gather links and resources for checking compatibility when designing, deploying, or upgrading...

The VMware Compatibility Guide shows the certification status of operating system releases for use as a Guest OS by the following VMware products:
• VMware ESXi/ESX Server 3.0 and later
• VMware Workstation 6.0 and later
• VMware Fusion 2.0 and later
• VMware ACE 2.0 and later
• VMware Server 2.0 and later
http://partnerweb.vmware.com/comp_guide2/pdf/VMware_GOS_Compatibility_Guide.pdf

VMware Compatibility Guide Portal:
VMware provides support only for the devices that are listed in this document

http://www.vmware.com/resources/compatibility/search.php


Correlating VMware products build numbers to update levels (1014508)

VMware vSphere and vCloud suite build numbers table

This table provides a list of all VMware vSphere and vCloud suite build numbers and release dates. The build numbers are based on full installations. Patching ESXi/ESX hosts increments the build number that shows in vCenter Server.
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1014508
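
A quick, generic PowerCLI sketch (not from the KB) for pulling the build numbers you would look up in that table (the vCenter name is hypothetical):

Connect-VIServer vcenter01.lab.local
Get-VMHost | Select-Object Name, Version, Build
$global:DefaultVIServer | Select-Object Name, Version, Build

The first query returns each ESXi host's build, and the second returns the build of the vCenter Server the session is connected to.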


Enhanced vMotion Compatibility (EVC) processor support (1003212)
For hosts in a cluster with different CPU generations

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1003212
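
A hedged PowerCLI sketch for checking and setting the EVC baseline (the cluster name and EVC mode key are hypothetical; valid mode keys depend on the vCenter version and CPU family, per the KB above):

Get-Cluster | Select-Object Name, EVCMode
Set-Cluster -Cluster (Get-Cluster "Cluster01") -EVCMode "intel-sandybridge" -Confirm:$false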




IBM servers OS support
http://www-03.ibm.com/systems/info/x86servers/serverproven/compat/us/nos/vmware.html