900-9X6AF-0016-ST1 Nvidia ConnectX-6 InfiniBand/Ethernet Adapter, 100Gb/s, 1-Port QSFP56, PCIe 3.0/4.0 x16
- Free Ground Shipping
- Min. 6-Month Replacement Warranty
- Genuine/Authentic Products
- Easy Return and Exchange
- Different Payment Methods
- Best Price
- We Guarantee Price Matching
- Tax-Exempt Facilities
- 24/7 Live Chat, Phone Support
- Visa, MasterCard, Discover, and Amex
- JCB, Diners Club, UnionPay
- PayPal, ACH/Bank Transfer (11% Off)
- Apple Pay, Amazon Pay, Google Pay
- Buy Now, Pay Later - Affirm, Afterpay
- GOV/EDU/Institution POs Accepted
- Invoices
- Deliver Anywhere
- Express Delivery in the USA and Worldwide
- Ship to APO/FPO
- For USA - Free Ground Shipping
- Worldwide - from $30
Nvidia 900-9X6AF-0016-ST1 ConnectX-6 InfiniBand / Ethernet Adapter
The Nvidia 900-9X6AF-0016-ST1 ConnectX-6 Adapter Card is a high-performance network interface solution that enables both InfiniBand and Ethernet connectivity at speeds up to 100Gb/s. Designed with PCI Express 3.0/4.0 x16 support and a single QSFP56 port, this card delivers ultra-low latency, exceptional bandwidth, and efficiency for demanding data centers, cloud infrastructures, and high-performance computing clusters.
General Features and Highlights
- Manufacturer: Nvidia
- Part Number: 900-9X6AF-0016-ST1
- Network Protocols: InfiniBand HDR100, EDR, FDR, QDR, DDR, SDR & Ethernet 10/25/40/50/100 Gb/s
- Interface: PCIe Gen3/Gen4 x16 lanes
- Connector Type: QSFP56 (supporting both optical and copper cables)
- Form Factor: Compact, efficient card with dimensions 6.6” x 2.71” (167.65mm x 68.90mm)
Key Specifications
- Interface: PCIe Gen3/4 x16
- Connector: QSFP56 supporting both copper and optical
- Protocols: InfiniBand (SDR through HDR100) & Ethernet (up to 100Gb/s)
- Dimensions: 6.6” x 2.71”
- Power Consumption: Typical 15.6W
- Operational Temp: 0°C to 55°C
- Max Altitude: 3050m
Physical Dimensions
- Length: 6.6 inches (167.65mm)
- Height: 2.71 inches (68.90mm)
- Connector: Single QSFP56 supporting InfiniBand and Ethernet
InfiniBand Compatibility
- 1x/2x/4x SDR (2.5Gb/s per lane)
- DDR (5Gb/s per lane)
- QDR (10Gb/s per lane)
- FDR10 (10.3125Gb/s per lane)
- FDR (14.0625Gb/s per lane)
- EDR (25Gb/s per lane)
- HDR100 (2 lanes x 50Gb/s each)
Ethernet Standards
- 100GbE: 100GBASE-CR4, CR2, KR4, SR4
- 50GbE: 50GBASE-R2, R4
- 40GbE: 40GBASE-CR4, KR4, SR4, LR4, ER4, R2
- 25GbE: 25GBASE-R
- 20GbE: 20GBASE-KR2
- 10GbE: 10GBASE-LR, ER, CX4, CR, KR, SR
- 1GbE: 1000BASE-CX, 1000BASE-KX
- SGMII support for compatibility with legacy connections
Performance and Data Throughput
- InfiniBand bandwidth: SDR through HDR100 supported
- Ethernet bandwidth: 10/25/40/50/100 Gb/s
- PCI Express Gen3 and Gen4 compliance at 8.0 GT/s and 16 GT/s per lane, respectively
- x16 lane width for maximum throughput
Power Requirements
- Auxiliary Voltage: 3.3V
- Maximum Current: 100mA
- Typical Power with passive cables: 15.6W
- Maximum QSFP56 port power delivery: 5W
- Refer to Nvidia ConnectX-6 VPI specifications for higher load configurations
Cooling and Airflow
- Passive cables: 300 LFM at 55°C (heatsink to port), 200 LFM at 35°C (port to heatsink)
- Nvidia active 4.7W cables: 300 LFM at 55°C (heatsink to port), 200 LFM at 35°C (port to heatsink)
Environmental Tolerance
Temperature Ranges
- Operational: 0°C to 55°C
- Storage / Non-operational: -40°C to 70°C
Humidity Levels
- Operational: 10% to 85% relative humidity
- Non-operational: 10% to 90% relative humidity
Key Advantages of Nvidia 900-9X6AF-0016-ST1
- Dual support for InfiniBand and Ethernet with advanced HDR and 100GbE connectivity
- Optimized for modern high-performance servers and enterprise-grade applications
- Efficient cooling architecture for sustainable workloads
- Broad compatibility with multiple Ethernet and InfiniBand standards
- Compact yet durable design ensuring reliability in demanding IT infrastructures
Product overview: 900-9X6AF-0016-ST1 NVIDIA ConnectX-6 InfiniBand Adapter
The 900-9X6AF-0016-ST1 is an OEM ordering part number for an NVIDIA ConnectX-6 family adapter — a high-performance, single-port QSFP56 InfiniBand and 100GbE network card designed for demanding data center workloads. Built to deliver ultra-low latency, advanced offloads and robust protocol support, this PCIe x16 form factor card targets HPC clusters, AI training/inference farms, storage arrays and hyperscale cloud deployments that require deterministic latency, high throughput and flexible deployment options.
Performance and throughput considerations
The single QSFP56 port on this ConnectX-6 card supports up to 100Gb/s (HDR100/EDR InfiniBand or 100GbE) per the card’s specification; when installed in a PCIe Gen4 x16 slot, some ConnectX-6 variants can drive higher aggregate host throughput, depending on firmware and SKU. The card’s SerDes and link layer support both PAM4 and NRZ signaling modes to interoperate with modern switches and transceivers. The result is a predictable, high-bandwidth fabric for large data sets and dense inter-node communications.
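As a rough sanity check on the numbers above, the short C sketch below compares the usable host bandwidth of a PCIe Gen3 x16 and Gen4 x16 slot (128b/130b line coding, before transaction-layer overhead) against the 100Gb/s QSFP56 link rate. The figures are approximate and for illustration only.

```c
/* Approximate PCIe host bandwidth vs. the 100 Gb/s QSFP56 link rate.
 * Encoding overhead is 128b/130b for both PCIe Gen3 and Gen4;
 * TLP/DLLP protocol overhead is not modeled. */
#include <stdio.h>

int main(void)
{
    const double lanes = 16.0;
    const double gen3_gtps = 8.0;           /* GT/s per lane, PCIe 3.0 */
    const double gen4_gtps = 16.0;          /* GT/s per lane, PCIe 4.0 */
    const double encoding = 128.0 / 130.0;  /* 128b/130b line coding */

    double gen3_gbps = lanes * gen3_gtps * encoding;  /* ~126 Gb/s */
    double gen4_gbps = lanes * gen4_gtps * encoding;  /* ~252 Gb/s */

    printf("PCIe Gen3 x16: ~%.0f Gb/s usable\n", gen3_gbps);
    printf("PCIe Gen4 x16: ~%.0f Gb/s usable\n", gen4_gbps);
    printf("QSFP56 link:    100 Gb/s\n");
    /* Either slot generation leaves headroom for a single 100 Gb/s port. */
    return 0;
}
```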
Latency and determinism
For applications such as distributed training, MPI, in-memory databases and high-performance storage, sub-microsecond latencies are critical. ConnectX-6 devices are engineered with RDMA and transport optimizations to deliver consistent low latency; vendor datasheets and product pages report typical messaging latencies below 0.6 μs under optimal conditions. This determinism is a key advantage over traditional kernel-based TCP stacks in latency-sensitive clusters.
Deep dive: hardware, form factor and physical interfaces
Card dimensions and bracket options
The 900-9X6AF-0016-ST1 ConnectX-6 card follows NVIDIA’s half-height, half-length PCIe adapter profile. The compact footprint (approximately 167.65mm × 68.90mm) allows dense server populations while still providing full x16 electrical connectivity. Some vendor variants ship with tall brackets for compatibility with larger chassis. Always verify the bracket type and server slot clearance before installing.
QSFP56 flexible cabling and optics
The single QSFP56 port supports both optical transceivers and direct-attach copper breakouts, giving architects choices between long-reach fiber, short-reach DAC and active optical cables. QSFP56 modules and cables are widely available for 100GbE and InfiniBand HDR100/EDR interconnects — choose the right transceiver to match your switch/router optics and the length of links in your rack or between racks.
InfiniBand VPI and Ethernet modes
ConnectX-6 adapters are often VPI (Virtual Protocol Interconnect) capable, meaning a single hardware SKU can operate either as InfiniBand or as RoCE/100GbE Ethernet depending on firmware and configuration. The 900-9X6AF-0016-ST1 OPN is listed among NVIDIA’s InfiniBand/Ethernet adapter ordering part numbers and firmware compatibility matrices, so customers can plan hardware inventories that cover multiple protocol needs. Firmware versions and specific OPNs determine exact supported protocols and speeds — check NVIDIA’s firmware compatibility documentation when building mixed environments.
Firmware, driver and software ecosystem
NVIDIA provides user manuals, firmware bundles and the Mellanox OFED drivers for ConnectX-6 adapters. Production deployments require matching firmware and driver versions to ensure features like SR-IOV, RoCEv2, NVMe-oF offloads and vSwitch acceleration operate correctly. NVIDIA’s documentation and release notes list compatible firmware packages for specific OPNs (including the 900-9X6AF family), and the vendor maintains long-term support cycles for data center customers.
Recommended driver tools & utilities
- Mellanox / NVIDIA OFED drivers for Linux (kernel integration and RDMA stacks).
- NVIDIA Firmware Tools (MFT) or vendor utilities for adapter monitoring, configuration and firmware updates.
- ibstat, ibv_devinfo and ethtool for link diagnostics and tuning checks (a minimal device-query sketch follows this list).
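As a hedged illustration of what those diagnostics expose, the following C sketch uses the libibverbs API (part of the OFED stack) to enumerate RDMA devices and query port 1, roughly mirroring the information reported by ibv_devinfo. Device names and port numbering on your host may differ.

```c
/* Minimal sketch: list RDMA devices and report the state of port 1.
 * Requires libibverbs; build with: cc query_ports.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        if (ibv_query_port(ctx, 1, &port) == 0) {
            printf("%s: port 1 state=%s active_width=%d active_speed=%d\n",
                   ibv_get_device_name(devs[i]),
                   ibv_port_state_str(port.state),
                   port.active_width, port.active_speed);
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devs);
    return 0;
}
```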
Advanced features and offloads
RDMA, GPU Direct and NVMe-oF
ConnectX-6 supports full RDMA semantics for both InfiniBand and RoCE, enabling direct zero-copy transfers between remote memory spaces. When paired with NVIDIA GPUs and GPU Direct RDMA, the adapter can move data between GPU memory and remote nodes without staging buffers in host memory — a significant performance advantage for multi-GPU training and tightly coupled MPI jobs. Additionally, ConnectX-6 offloads NVMe-over-Fabric (NVMe-oF) functions to reduce CPU overhead and accelerate storage access for disaggregated storage architectures.
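The sketch below illustrates the basic zero-copy building block behind these offloads: registering a host buffer with the adapter via libibverbs so a remote peer can target it with RDMA operations. It is a minimal illustration only; queue-pair creation, connection setup and the actual RDMA work requests are omitted, and the buffer size and access flags are example values.

```c
/* Minimal RDMA memory-registration sketch. Requires libibverbs;
 * build with: cc reg_mr.c -libverbs */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int n = 0;
    struct ibv_device **devs = ibv_get_device_list(&n);
    if (!devs || n == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx)
        return 1;
    struct ibv_pd *pd = ibv_alloc_pd(ctx);   /* protection domain */

    size_t len = 1 << 20;                    /* example: 1 MiB buffer */
    void *buf = malloc(len);

    /* Register the buffer: the adapter pins it and returns keys that a
     * remote peer can use for RDMA READ/WRITE directly into this memory,
     * with no intermediate copies through the host CPU. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        perror("ibv_reg_mr");
        return 1;
    }
    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    /* A real application would now create a queue pair, exchange the rkey
     * and buffer address with the peer, and post RDMA work requests. */
    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}
```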
Security, encryption and storage offloads
Hardware-accelerated crypto engines and block-level encryption offloads allow the adapter to perform inline encryption without major host CPU impact. ConnectX-6 also includes checksum offloads and storage acceleration features that benefit block and object storage systems, including inline integrity checks and low-latency replication streams. These features are especially useful in compliance-sensitive environments or multi-tenant clouds where secure data movement matters.
Virtualization and multi-tenant operations
Support for SR-IOV, hardware vSwitch/vRouter acceleration and programmable pipelines make the ConnectX-6 well suited to NFV, containerized networking and multi-tenant workloads. Offloading virtual switching improves throughput for East-West traffic inside hypervisor hosts and reduces noisy-neighbor effects by keeping packet processing in hardware. This yields better overall host consolidation ratios and more predictable performance for tenant workloads.
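As an illustration of how virtual functions are typically provisioned on Linux, the hedged sketch below writes a VF count to the kernel's generic sysfs sriov_numvfs attribute. The interface name ens1f0 and the VF count are placeholders, and SR-IOV must already be enabled in the adapter firmware and system BIOS; this is a generic Linux mechanism, not an NVIDIA-specific tool.

```c
/* Hedged sketch: request SR-IOV virtual functions via sysfs. Run as root.
 * The interface name and VF count below are placeholders. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/class/net/ens1f0/device/sriov_numvfs"; /* placeholder ifname */
    const int num_vfs = 4;                                          /* example VF count */

    FILE *f = fopen(path, "w");
    if (!f) {
        perror("fopen");
        return 1;
    }
    fprintf(f, "%d\n", num_vfs);  /* the kernel creates the VFs on write */
    fclose(f);

    printf("requested %d VFs via %s\n", num_vfs, path);
    return 0;
}
```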
Common deployment scenarios and industry use cases
High Performance Computing (HPC)
HPC clusters—running MPI, distributed scientific simulations or weather modeling—benefit from InfiniBand’s low latency and efficient collective operations. The 900-9X6AF-0016-ST1’s HDR100/EDR support and RDMA acceleration reduce synchronization overhead for tightly coupled jobs and deliver measurable speedups in time-to-solution. Architects commonly select ConnectX-6 adapters for interconnect fabrics in compute nodes and for accelerator-dense servers.
AI/ML training and inference
Large model training demands fast all-reduce operations and GPU-GPU communication across nodes. ConnectX-6 with GPU Direct RDMA enables efficient inter-GPU transfers across racks, lowering host CPU involvement and preserving memory bandwidth for model compute. These advantages translate to higher parallel efficiency and reduced epoch times in distributed training pipelines.
Storage networks and NVMe-oF
When used as a target or initiator for NVMe-oF, adapters in the ConnectX family accelerate storage traffic and provide features such as inline encryption and data integrity checks. Storage arrays and software-defined storage systems that need low latency and high IOPS commonly adopt InfiniBand or RoCE fabrics based on ConnectX hardware to meet SLAs.
Cloud & virtualization (NFV)
Service providers and private clouds deploy ConnectX-6 adapters for their combination of throughput, virtualization offloads and programmable pipelines. This allows carrier-grade VNFs and tenant networks to run with closer to bare-metal performance and predictable networking behavior.
Checklist notes
A mismatch between a card’s firmware and the data center’s management software can limit features or cause link negotiation issues. Always test firmware updates in staging and track the vendor’s compatibility notes for specific OPNs such as 900-9X6AF-0016-ST1.
Network stack tuning
For optimal RDMA and RoCEv2 performance, you may need to tune host kernel parameters (e.g., shared memory limits and IRQ affinity), queue pair counts and buffer sizes. NVIDIA documentation and community tuning guides provide recommended starting points for clusters running MPI or NVMe-oF.
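As one concrete example of such tuning, the sketch below raises the kernel's maximum socket buffer sizes through /proc/sys. The 64 MiB value is only an illustrative starting point and should be validated against your own workload and vendor guidance; this touches generic Linux knobs, not adapter-specific settings.

```c
/* Hedged sketch: raise kernel socket buffer ceilings via /proc/sys.
 * Values are illustrative only. Run as root. */
#include <stdio.h>

static int write_sysctl(const char *path, const char *value)
{
    FILE *f = fopen(path, "w");
    if (!f) {
        perror(path);
        return -1;
    }
    fprintf(f, "%s\n", value);
    fclose(f);
    return 0;
}

int main(void)
{
    /* Maximum socket receive/send buffer sizes, in bytes (example: 64 MiB). */
    write_sysctl("/proc/sys/net/core/rmem_max", "67108864");
    write_sysctl("/proc/sys/net/core/wmem_max", "67108864");
    return 0;
}
```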
900-9X6AF-0016-ST1 vs other ConnectX SKUs
The 900-9X6AF-0016-ST1 is a single-port QSFP56 variant aimed at 100Gb/s InfiniBand/Ethernet workloads. Other ConnectX-6 family SKUs may offer dual QSFP56 ports, 200Gb/s support per port (for HDR/200GbE variants), or EN vs VPI flavored firmware that changes protocol behavior. When selecting between SKUs, weigh the need for dual ports, whether you require 200GbE, and whether your fabric is InfiniBand or Ethernet/RoCE based. Firmware compatibility matrices from NVIDIA list exact OPN mappings and should be consulted for precise SKU features.
Common tradeoffs
- Single vs dual port: Single-port cards can be more cost-effective for smaller nodes; dual-port variants increase redundancy or allow fabric segmentation.
- 100Gb/s vs 200Gb/s: If your topology anticipates future 200GbE adoption, choose SKUs and switch optics that support higher PAM4 speeds.
- EN vs VPI: EN (Ethernet) SKUs are Ethernet-centric, while VPI SKUs can switch between InfiniBand and Ethernet depending on firmware and license.
Can this card be used for both InfiniBand and Ethernet?
Many ConnectX-6 variants are VPI-capable and can operate in either InfiniBand or Ethernet modes depending on firmware. The 900-9X6AF family includes entries in NVIDIA’s firmware compatibility lists showing InfiniBand/EN assignments — confirm the exact OPN and firmware build for your SKU.
Where to get official help
If you need hands-on assistance for firmware upgrades, driver installs or performance tuning, contact your hardware vendor’s enterprise support line or the NVIDIA networking support channels. They can provide compatibility checks for your specific server model, recommended firmware bundles and best-practice configuration scripts for Linux distributions and orchestration systems.
Technical note
The 900-9X6AF-0016-ST1 represents a turnkey, high-performance adapter option for teams building low-latency fabrics or accelerating storage and AI workflows. Its combination of hardware offloads, QSFP56 flexibility and PCIe x16 compatibility makes it a practical choice when predictable latency, high throughput and flexible protocol support are required. For exact feature availability and firmware-level capabilities, always consult NVIDIA’s ConnectX-6 documentation and the firmware compatibility tables referenced above.
