
900-9X6AG-0056-ST1 Nvidia ConnectX-6 Dx Card 2-Port QSFP56 100GbE PCIe 4.0 x16 Ethernet Adapter

Product may have slight variations vs. image

Brief Overview of 900-9X6AG-0056-ST1

Nvidia 900-9X6AG-0056-ST1 ConnectX-6 Dx Card 2-Port QSFP56 100GbE PCIe 4.0 x16 Ethernet Adapter. Excellent Refurbished condition with a 1-year replacement warranty.

$1,213.65
$899.00
You save: $314.65 (26%)

Additional 7% discount at checkout

  • SKU/MPN: 900-9X6AG-0056-ST1
  • Availability: ✅ In Stock
  • Processing Time: Usually ships same day
  • Manufacturer: Nvidia
  • Manufacturer Warranty: None
  • Product/Item Condition: Excellent Refurbished
  • ServerOrbit Replacement Warranty: 1 Year Warranty
Our Advantages
Payment Options
  • — Visa, MasterCard, Discover, and Amex
  • — JCB, Diners Club, UnionPay
  • — PayPal, ACH/Bank Transfer (11% Off)
  • — Apple Pay, Amazon Pay, Google Pay
  • — Buy Now, Pay Later - Affirm, Afterpay
  • — GOV/EDU/Institution POs Accepted
  • — Invoices
Delivery
  • — Deliver Anywhere
  • — Express Delivery in the USA and Worldwide
  • — Ship to APO/FPO
  • — USA - Free Ground Shipping
  • — Worldwide Shipping from $30


Description

Nvidia 900-9X6AG-0056-ST1 ConnectX-6 Dx Adapter

The Nvidia 900-9X6AG-0056-ST1 ConnectX-6 Dx Ethernet Adapter is a high-performance dual-port QSFP56 PCIe 4.0 x16 card, designed to meet the demands of modern data centers and enterprise environments. Engineered with efficiency, scalability, and reliability in mind, it delivers 100GbE network speeds across copper and optical connections, offering advanced flexibility for varied networking infrastructures.

Manufacturer and Part Details

  • Manufacturer: Nvidia
  • Part Number: 900-9X6AG-0056-ST1
  • Model: ConnectX-6 Dx Dual-Port 100GbE PCIe Adapter
  • Crypto Capability: Disabled, Secure Boot Enabled

Physical Specifications

Dimensions and Layout

  • Size: 5.59 in. x 2.71 in. (142.00 mm x 68.90 mm)
  • Form Factor: Compact PCIe 4.0 x16 design

Connectivity

  • Interface: Dual QSFP56 Ethernet connectors
  • Compatibility: Works with both copper and optical cabling

Ethernet Speed Options

  • 100Gb/s: 100GBASE-CR2, CR4, KR4, SR4, KR2, SR2
  • 50Gb/s: 50GBASE-R2, R4
  • 40Gb/s: 40GBASE-CR4, KR4, SR4, LR4, ER4, R2
  • 25Gb/s: 25GBASE-R
  • 20Gb/s: 20GBASE-KR2
  • 10Gb/s: 10GBASE-LR, ER, CX4, CR, KR, SR
  • 1Gb/s: 1000BASE-CX, KX
  • Legacy Support: SGMII protocol

PCI Express Compatibility

Interface Details

  • The adapter presents a PCI Express 3.0/4.0 x16 host interface with a per-lane SerDes rate of up to 16.0 GT/s, while maintaining backward compatibility with PCIe 2.0 and 1.1 slots.
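
To confirm that the card has actually trained at Gen4 x16 in a given slot, the negotiated PCIe link can be read from sysfs on Linux. The following is a minimal sketch; the PCI address is a hypothetical placeholder (find the real one with lspci).

```python
from pathlib import Path

# Hypothetical PCI address of the adapter; locate yours with `lspci | grep -i connectx`.
PCI_ADDR = "0000:3b:00.0"
dev = Path("/sys/bus/pci/devices") / PCI_ADDR

# current_link_* report the negotiated link; max_link_* report the device capability.
for attr in ("current_link_speed", "current_link_width",
             "max_link_speed", "max_link_width"):
    print(f"{attr}: {(dev / attr).read_text().strip()}")
```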

Power and Energy Efficiency

Power Consumption

  • Typical Power with Passive Cables: 18.7W (PCIe 3.0), 19.52W (PCIe 4.0)
  • Maximum Power with Passive Cables: 25.28W (PCIe 3.0), 26.64W (PCIe 4.0)
  • QSFP56 Port Power Capacity: 5W per port

Electrical Specifications

  • Voltage: 3.3V Auxiliary
  • Maximum Current: 100mA

Cooling and Airflow Requirements

  • Passive Cable Airflow: 550 LFM (Heatsink to Port, Hot Aisle)
  • Active 2.5W Cable Airflow: 700 LFM (Heatsink to Port, Hot Aisle)

Environmental Characteristics

Operating Conditions

  • Operational Temperature: 0°C to 55°C
  • Storage Temperature: -40°C to 70°C
  • Operating Humidity: 10% to 85% Relative Humidity
  • Non-operational Humidity: 10% to 90% Relative Humidity
  • Altitude (Operational): Up to 3050m

Compliance and Certification

  • Regulatory Status: RoHS Compliant
  • Environment-Friendly Design

Highlighted Benefits of Nvidia ConnectX-6 Dx 100GbE Adapter

Performance Advantages

  • Unmatched dual-port 100GbE connectivity for high-throughput computing
  • Scalable data rate support for evolving workloads
  • Reduced latency for real-time applications

Deployment Flexibility

  • Compatible with copper and optical cabling
  • Wide range of Ethernet protocol support
  • Backward-compatible PCIe interface ensures versatile integration

Reliability and Durability

  • Designed for robust operational ranges in data centers
  • Secure Boot enabled for enhanced system protection
  • RoHS compliance ensuring eco-friendly deployment

900-9X6AG-0056-ST1 Nvidia ConnectX-6 Dx Card

The 900-9X6AG-0056-ST1 Nvidia ConnectX-6 Dx family of adapters is a high-performance line of 100 Gigabit Ethernet (100GbE) network interface cards (NICs) engineered for modern data centers, cloud infrastructure, high-performance computing (HPC), and latency-sensitive AI/ML clusters. This category centers on the 2-port QSFP56 form factor with PCIe 4.0 x16 connectivity, delivering accelerated networking through advanced offloads, hardware-based telemetry, and multi-protocol support (Ethernet, RoCE, and more). Buyers looking for a robust, production-ready NIC will find the ConnectX-6 Dx adapters provide the reliability, feature set, and ecosystem compatibility required for large-scale deployments.

Core value proposition and keywords

Key search phrases for this category include: 900-9X6AG-0056-ST1, Nvidia ConnectX-6 Dx 2Port QSFP56, 100GBE PCIe 4.0 x16 Ethernet adapter, QSFP56 NIC, 100Gb NIC for servers, and RDMA NIC RoCE. These terms map to the product’s unique SKU, interface standards, and the primary benefits—extreme throughput, low latency, and offload capabilities. Using these phrases naturally in product descriptions ensures search engines index the page for both SKU-driven and capability-driven queries.

Technical highlights: What defines the ConnectX-6 Dx 2-Port

Hardware architecture and interface

The ConnectX-6 Dx cards in the 900-9X6AG-0056-ST1 category use a PCI Express Gen4 x16 host interface, enabling full line-rate performance across both QSFP56 ports while maintaining efficient CPU utilization. The QSFP56 port type supports native 100GbE using QSFP56 optics or breakout configurations (e.g., 4x25Gb lanes using breakout cables). Internally, the adapter integrates advanced packet processing engines and memory buffers designed to handle sustained, high-throughput workloads without introducing jitter.
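
As a quick check of what a port has negotiated and which link modes the firmware advertises (useful before planning 4x25G or 2x50G breakouts), the standard ethtool utility can be queried. This sketch assumes a hypothetical interface name for one of the QSFP56 ports.

```python
import subprocess

# Hypothetical interface name for one of the QSFP56 ports; adjust to your host's naming.
IFACE = "ens1f0"

# Plain `ethtool <iface>` reports negotiated speed plus supported/advertised link modes.
out = subprocess.run(["ethtool", IFACE], capture_output=True, text=True, check=True).stdout

for line in out.splitlines():
    stripped = line.strip()
    if stripped.startswith(("Speed:", "Duplex:", "Link detected:")) or "link modes" in stripped:
        print(stripped)
```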

Characteristics: throughput, latency, and deterministic behavior

Throughput: saturating 100 Gigabit links

Expect the ConnectX-6 Dx 2-port QSFP56 adapters to sustain near line-rate throughput across standard packet sizes and mixed workloads. For bulk data transfers—like backup-to-disk or distributed storage replication—this card keeps CPU impact low while allowing the host to handle application-level processing. When configured correctly with proper link aggregation or RDMA verbs, the adapter handles multi-gigabit flows with predictable scaling.
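
For a rough, tool-free look at sustained throughput on a Linux host, the kernel's per-interface byte counters can be sampled over a fixed interval. This is a sanity check rather than a substitute for a proper load generator such as iperf; the interface name below is a placeholder.

```python
import time
from pathlib import Path

# Hypothetical interface name; adjust to whatever the ConnectX-6 Dx port enumerates as.
IFACE = "ens1f0"
STATS = Path(f"/sys/class/net/{IFACE}/statistics")

def read_bytes(counter: str) -> int:
    """Read a cumulative byte counter (rx_bytes or tx_bytes) from sysfs."""
    return int((STATS / counter).read_text())

# Sample the counters over one second and report the observed rate in Gb/s.
rx0, tx0 = read_bytes("rx_bytes"), read_bytes("tx_bytes")
time.sleep(1.0)
rx1, tx1 = read_bytes("rx_bytes"), read_bytes("tx_bytes")

print(f"RX: {(rx1 - rx0) * 8 / 1e9:.2f} Gb/s")
print(f"TX: {(tx1 - tx0) * 8 / 1e9:.2f} Gb/s")
```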

Latency and jitter considerations

The hardware offloads and flow steering features reduce software interrupt pressure and context switching, yielding microsecond-class latencies for small messages and consistent tail latency for latency-sensitive services (e.g., key-value stores, distributed databases, and real-time analytics). For environments demanding deterministic latency, enabling RDMA and leveraging hardware timestamping optimizes performance further by removing TCP/IP stack overhead.
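
To get a coarse feel for median and tail round-trip latency over the regular kernel UDP path (RDMA-level numbers require verbs-based tools such as perftest), a simple echo probe can be run against a peer. The sketch below assumes a hypothetical UDP echo responder on the remote host.

```python
import socket
import statistics
import time

# Hypothetical UDP echo responder on the peer host; any simple echo service works.
PEER = ("192.0.2.10", 9000)
SAMPLES = 1000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)

rtts_us = []
for _ in range(SAMPLES):
    t0 = time.perf_counter()
    sock.sendto(b"ping", PEER)
    sock.recvfrom(64)                                  # wait for the echo
    rtts_us.append((time.perf_counter() - t0) * 1e6)   # microseconds

rtts_us.sort()
print(f"median RTT: {statistics.median(rtts_us):.1f} us")
print(f"p99 RTT:    {rtts_us[int(len(rtts_us) * 0.99)]:.1f} us")
```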

CPU offload and application acceleration

Offload capabilities—such as stateless TCP/UDP checksum offloads, Large Send Offload (LSO), and Receive Side Scaling (RSS)—are implemented to lower CPU cycles per packet. NVMe-oF (NVMe over Fabrics) and RDMA support make this NIC suitable for storage acceleration, while DPDK compatibility enables custom user-space packet processing for telecom, NFV, or packet capture workloads.
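
On Linux, the offloads the driver currently exposes and enables can be inspected with ethtool -k. The sketch below wraps that call and filters a few offloads of interest; the interface name is a placeholder.

```python
import subprocess

# Hypothetical interface name for the ConnectX-6 Dx port.
IFACE = "ens1f0"

# `ethtool -k` lists the offload features the driver currently exposes and their state.
out = subprocess.run(["ethtool", "-k", IFACE],
                     capture_output=True, text=True, check=True).stdout

# Show a few offloads commonly relied on for lowering CPU cycles per packet.
for feature in ("tcp-segmentation-offload", "generic-receive-offload",
                "rx-checksumming", "tx-checksumming"):
    for line in out.splitlines():
        if line.strip().startswith(feature):
            print(line.strip())
```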

Form factor, power, and thermal profile

Physical considerations

The adapter is typically a full-height, dual-slot PCIe card requiring careful planning for server chassis compatibility. Due to the dual QSFP56 ports and on-board silicon, ensure adequate clearance and cooling in dense rack servers. These cards are common in 1U–2U servers with optimized airflow front-to-back, and many vendors provide pass-through or cable management accessories for QSFP transceivers and active cables.

Power consumption

Power draw varies with traffic and feature usage (e.g., when enabling full hardware offloads or active RoCE). Typical operational power ranges should be checked on the vendor’s datasheet for the precise SKU 900-9X6AG-0056-ST1, but administrators should provision PSU headroom and rack cooling accordingly to avoid thermal throttling during peak throughput scenarios.

Compatibility and ecosystem

Transceivers, DACs, and AOCs

The QSFP56 interface supports a variety of physical media:

  • QSFP56 SR/LR optical transceivers for multimode and single-mode fiber links
  • Active Optical Cables (AOCs) for extended reach without separate transceivers
  • Direct Attach Copper (DAC) passive and active cables for short-reach, cost-effective connections within a rack or between adjacent racks
  • Breakout cables enabling 1x100G to 4x25G or 2x50G lane splits, depending on switch capabilities

Ensure transceiver vendor compatibility with the NVIDIA firmware and with the switch side to avoid link negotiation issues. Many deployments use OEM-validated optics to ensure stable operation and full digital diagnostics (DOM) through the QSFP module interface.
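
When a link refuses to come up or shows errors, reading the plugged module's EEPROM often points to the culprit (wrong media type, unsupported vendor, marginal optical power). A minimal sketch using ethtool -m, with a hypothetical port name:

```python
import subprocess

IFACE = "ens1f0"  # hypothetical port name

# `ethtool -m` dumps the plugged module's EEPROM (vendor, part number, and, when
# supported, digital diagnostics such as temperature and RX/TX power).
result = subprocess.run(["ethtool", "-m", IFACE], capture_output=True, text=True)

if result.returncode != 0:
    print("No module diagnostics available (port empty or EEPROM access unsupported).")
else:
    for line in result.stdout.splitlines():
        if any(key in line for key in ("Vendor", "Identifier", "Transceiver",
                                       "temperature", "power")):
            print(line.strip())
```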

Switch and server compatibility

The ConnectX-6 Dx cards integrate well with leading data center switch vendors and server OEMs. Look for official interoperability notes or HCL (Hardware Compatibility Lists) from both the NIC vendor and your server/switch manufacturer. Compatibility includes support for link speeds, breakouts, and advanced features like PFC (Priority Flow Control) when implementing RoCE.

Deployment scenarios and real-world use cases

Cloud and virtualized infrastructures

In cloud and virtualization stacks, ConnectX-6 Dx cards deliver SR-IOV and NVGRE/VXLAN offloads to increase VM density and reduce CPU overhead. They are frequently used in multi-tenant environments where isolation and predictable performance are essential. Integration with hypervisors (KVM, VMware) and container networking (CNI plugins optimized for DPDK) enables fast packet paths for tenant workloads.
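
On Linux, SR-IOV virtual functions are typically instantiated through sysfs once SR-IOV is enabled in the adapter firmware and the server BIOS. The sketch below shows the general pattern; the physical-function interface name and VF count are placeholders chosen for illustration.

```python
from pathlib import Path

# Hypothetical physical-function interface name; requires root and SR-IOV enabled
# in the adapter firmware and the server BIOS.
PF_IFACE = "ens1f0"
dev = Path(f"/sys/class/net/{PF_IFACE}/device")

total_vfs = int((dev / "sriov_totalvfs").read_text())
print(f"{PF_IFACE} supports up to {total_vfs} virtual functions")

# Reset any existing VFs, then create 8 for VM or container passthrough
# (the count is illustrative; size it to your tenancy model).
(dev / "sriov_numvfs").write_text("0")
(dev / "sriov_numvfs").write_text("8")
```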

High-performance computing (HPC) and AI clusters

HPC and AI workloads benefit from RDMA and GPUDirect technologies supported by Nvidia networking adapters. RDMA removes kernel overhead and enables direct memory access between hosts or between host and GPU memory, lowering latency for distributed training and large-scale model synchronization. This makes the ConnectX-6 Dx an excellent choice for GPU-dense nodes where network bottlenecks can hinder scaling efficiency.
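
Whether the RoCE/RDMA side of the adapter is visible to the host can be checked from sysfs: RDMA devices enumerate under /sys/class/infiniband once the relevant kernel modules are loaded. A small sketch:

```python
from pathlib import Path

# RDMA-capable devices (including RoCE ports on ConnectX NICs) appear under
# /sys/class/infiniband when the mlx5/RDMA core modules are loaded.
ib_root = Path("/sys/class/infiniband")

if not ib_root.exists():
    print("No RDMA devices found; check that the RDMA kernel modules are loaded.")
else:
    for dev in sorted(ib_root.iterdir()):
        fw = (dev / "fw_ver").read_text().strip() if (dev / "fw_ver").exists() else "n/a"
        print(f"{dev.name}: firmware {fw}")
        for port in sorted((dev / "ports").iterdir()):
            state = (port / "state").read_text().strip()   # e.g. "4: ACTIVE"
            rate = (port / "rate").read_text().strip()      # e.g. "100 Gb/sec ..."
            print(f"  port {port.name}: {state}, {rate}")
```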

Storage and NVMe over Fabrics

The NIC’s NVMe-oF support empowers servers to present or consume remote NVMe namespaces with minimal CPU overhead, unlocking disaggregated storage architectures. For storage arrays, the 100GbE interface reduces headroom constraints and allows larger data movements in less time—critical for backup windows, replication, and centralized storage fabrics.
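
Attaching a host to an NVMe-oF target is usually done with the standard nvme-cli tooling. The sketch below wraps the discover/connect steps; the transport, target address, and subsystem NQN are placeholders to substitute for your fabric.

```python
import subprocess

# Hypothetical target parameters; substitute your fabric's address and subsystem NQN.
TRANSPORT = "rdma"                 # RoCE transport; "tcp" is also an option
TARGET_IP = "192.0.2.50"
SVC_ID = "4420"                    # default NVMe-oF service port
NQN = "nqn.2024-01.com.example:disaggregated-pool-1"

# Discover subsystems exported by the target, then connect to one of them.
subprocess.run(["nvme", "discover", "-t", TRANSPORT, "-a", TARGET_IP, "-s", SVC_ID],
               check=True)
subprocess.run(["nvme", "connect", "-t", TRANSPORT, "-a", TARGET_IP, "-s", SVC_ID,
                "-n", NQN], check=True)

# The remote namespaces now show up as local /dev/nvmeXnY block devices.
subprocess.run(["nvme", "list"], check=True)
```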

Bulk procurement and refresh cycles

For rack-scale or pod-based procurement, standardize on a single SKU to simplify management and spare parts. Consider firmware parity across your estate to avoid heterogeneous behavior. When planning refresh cycles, evaluate whether PCIe 4.0 x16 slots are available or whether you’ll need to plan for PCIe 3.0 compatibility (with reduced host-side bandwidth but continued link-level functionality).

Driver and firmware best practices

Keep firmware and driver versions in sync with your OS and hypervisor to ensure feature availability and stability. For critical systems, perform controlled rollouts of firmware updates in staging environments first. Use vendor-supplied utilities to validate firmware integrity and to roll back if necessary. Note that some advanced features (e.g., DPDK optimizations or NVMe-oF) may require specific driver releases.
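
A simple way to audit driver and firmware parity across ports (and, with your configuration-management tool of choice, across hosts) is to parse ethtool -i output. The interface names below are placeholders.

```python
import subprocess

# Hypothetical list of ConnectX ports on this host; extend across the fleet via
# your configuration-management tooling.
IFACES = ["ens1f0", "ens1f1"]

for iface in IFACES:
    out = subprocess.run(["ethtool", "-i", iface],
                         capture_output=True, text=True, check=True).stdout
    info = dict(line.split(": ", 1) for line in out.splitlines() if ": " in line)
    print(f"{iface}: driver={info.get('driver')} "
          f"version={info.get('version')} firmware={info.get('firmware-version')}")
```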

Features
Manufacturer Warranty:
None
Product/Item Condition:
Excellent Refurbished
ServerOrbit Replacement Warranty:
1 Year Warranty