Tuesday, November 26, 2013

InfiniBand notes, lab and references


I have read a great article, with experience sharing, on a simple InfiniBand lab architecture from Vladan's blog here

Some highlights from the article:
  • Implements a 20 Gbps InfiniBand (IB) connection at little cost
  • Supports vSphere 5.x, but requires an unofficial driver, so it should not be used in a PROD environment
  • Supports Windows Server only up to 2008 R2, so the advanced features of Hyper-V in Server 2012 R2 cannot be used ><
  • Allows testing RDMA (Remote Direct Memory Access) performance
  • Difficult to scale out, as you may need an expensive InfiniBand switch
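Worth noting: the 20 Gbps in the first highlight is the raw signalling rate of a DDR 4x link; the older IB speeds use 8b/10b encoding, so the usable data rate is lower. A quick back-of-envelope calculation (the function name is mine, not from the article):

```python
# Back-of-envelope: why a "20 Gbps" DDR 4x InfiniBand link delivers
# less usable bandwidth than the headline number. SDR/DDR/QDR use
# 8b/10b encoding (8 data bits per 10 line bits); FDR uses 64b/66b.

def effective_gbps(lanes, lane_signal_gbps, encoding=8 / 10):
    """Usable data rate of an InfiniBand link in Gbps."""
    return lanes * lane_signal_gbps * encoding

# DDR 4x: 4 lanes at 5 Gbps signalling each = 20 Gbps raw
ddr_4x = effective_gbps(lanes=4, lane_signal_gbps=5.0)
print(f"DDR 4x usable: {ddr_4x:.0f} Gbps")   # 16 Gbps

# FDR 4x: 4 lanes at 14.0625 Gbps with 64b/66b encoding
fdr_4x = effective_gbps(4, 14.0625, encoding=64 / 66)
print(f"FDR 4x usable: {fdr_4x:.1f} Gbps")   # ~54.5 Gbps
```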
ref:
Infiniband @ home: your homelab to 20Gbps   http://www.hypervisor.fr/?p=4662


Are InfiniBand's Days Numbered?

Mellanox ConnectX-3-based InfiniBand and Ethernet Adapters for IBM System x


Figure 2 shows the Mellanox ConnectX-3 40GbE / FDR IB VPI Adapter (the shipping adapter includes a heatsink over the ASIC, which is not shown in the figure)

Figure 2. Mellanox ConnectX-3 40GbE / FDR IB VPI Adapter for IBM System x with 3U bracket (required ASIC heatsink not shown)


Features

The Mellanox ConnectX-3 10GbE Adapter has the following features:
  • Two 10 Gigabit Ethernet ports
  • Low-profile form factor adapter with 2U bracket (3U bracket available for CTO orders)
  • PCI Express 3.0 x8 host-interface (PCIe 2.0 and 1.1 compatible)
  • SR-IOV (16 Virtual Functions) supported by KVM
  • Enables Low Latency RDMA over Ethernet (supported with both non-virtualized and SR-IOV enabled virtualized servers)
  • TCP/UDP/IP stateless offload in hardware
  • Traffic steering across multiple cores
  • Intelligent interrupt coalescence
  • Industry-leading throughput and latency performance
  • Software compatible with standard TCP/UDP/IP stacks
  • Legacy and UEFI PXE network boot support

The Mellanox ConnectX-3 40GbE / FDR IB VPI Adapter has the following features:
  • Dual QSFP ports supporting FDR-14 InfiniBand or 40 Gb Ethernet
  • Low-profile form factor adapter with 2U bracket (3U bracket available for CTO orders)
  • PCI Express 3.0 x8 host-interface (PCIe 2.0 and 1.1 compatible)
  • Support for InfiniBand FDR speeds of up to 56 Gbps (auto-negotiation FDR-10, DDR and SDR)
  • Support for Virtual Protocol Interconnect (VPI), which enables one adapter for both InfiniBand and 10/40 Gb Ethernet. Supports three configurations:
    • 2 ports InfiniBand
    • 2 ports Ethernet
    • 1 port InfiniBand and 1 port Ethernet
  • SR-IOV (16 Virtual Functions) supported by KVM (Ethernet mode only)
  • Enables Low Latency RDMA over 40Gb Ethernet (supported with both non-virtualized and SR-IOV enabled virtualized servers)
  • High performance/low-latency networking
  • Sub 1 µs InfiniBand MPI ping latency
  • Support for QSFP to SFP+ for 10 GbE support
  • Traffic steering across multiple cores
  • Legacy and UEFI PXE network boot support (Ethernet mode only)
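The VPI port mode (InfiniBand vs. Ethernet) is selected per port in the adapter firmware. As a rough sketch using Mellanox's `mlxconfig` tool (the MST device path below is an assumption; use the one `mst status` reports on your system), one port can be set to InfiniBand and the other to Ethernet, matching the third configuration listed above:

```shell
# Start the Mellanox Software Tools service and list devices
mst start
mst status

# Hypothetical device path -- replace with the one 'mst status' reports.
# LINK_TYPE values: 1 = InfiniBand, 2 = Ethernet, 3 = VPI (auto-sense)
mlxconfig -d /dev/mst/mt4099_pci_cr0 set LINK_TYPE_P1=1 LINK_TYPE_P2=2

# A reboot (or adapter reset) is required for the new port types to apply
```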

4K Video Drives New Demands
Bandwidth and network ports required for Uncompressed 4K & 8K video
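To see why uncompressed 4K pushes past a single 10GbE port, here is a rough bandwidth estimate (assuming 8-bit 4:4:4 RGB at 60 fps, ignoring blanking and protocol overhead; the exact formats in the referenced diagram may differ):

```python
def uncompressed_gbps(width, height, fps, bits_per_pixel):
    """Raw video bandwidth in Gbps (no blanking, no protocol overhead)."""
    return width * height * fps * bits_per_pixel / 1e9

uhd_4k = uncompressed_gbps(3840, 2160, 60, 24)
uhd_8k = uncompressed_gbps(7680, 4320, 60, 24)
print(f"4K60, 8-bit RGB: {uhd_4k:.1f} Gbps")  # ~11.9 Gbps: exceeds one 10GbE port
print(f"8K60, 8-bit RGB: {uhd_8k:.1f} Gbps")  # ~47.8 Gbps: exceeds one 40GbE port
```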

