
900-9X6AG-0086-ST0 Nvidia ConnectX-6 Dx Card PCIe 4.0 x16 QSFP56 100GbE Dual-Port Network Adapter

Brief Overview of 900-9X6AG-0086-ST0

Nvidia 900-9X6AG-0086-ST0 ConnectX-6 Dx card, PCIe 4.0 x16, QSFP56, 100GbE dual-port network adapter. Factory-Sealed New in Original Box (FSB) with 3-Year Warranty.

List Price: $1,410.75
Our Price: $1,045.00
You save: $365.75 (26%)

Additional 7% discount at checkout

  • SKU/MPN: 900-9X6AG-0086-ST0
  • Availability: ✅ In Stock
  • Processing Time: Usually ships same day
  • Manufacturer: Nvidia
  • Manufacturer Warranty: 3 Years Warranty from Original Brand
  • Product/Item Condition: Factory-Sealed New in Original Box (FSB)
  • ServerOrbit Replacement Warranty: 1 Year Warranty
Our Advantages
Payment Options
  • Visa, MasterCard, Discover, and Amex
  • JCB, Diners Club, UnionPay
  • PayPal, ACH/Bank Transfer (11% Off)
  • Apple Pay, Amazon Pay, Google Pay
  • Buy Now, Pay Later: Affirm, Afterpay
  • GOV/EDU/Institution POs Accepted
  • Invoices
Delivery
  • Delivery Anywhere
  • Express Delivery in the USA and Worldwide
  • Ships to APO/FPO Addresses
  • USA: Free Ground Shipping
  • Worldwide: from $30
Description

General Information about Nvidia 900-9X6AG-0086-ST0

The Nvidia 900-9X6AG-0086-ST0 ConnectX-6 Dx Ethernet Adapter Card is engineered to deliver superior performance for modern data centers. Supporting dual-port QSFP56 interfaces, advanced security features such as crypto acceleration and secure boot, and PCIe 4.0 x16 bandwidth, it is optimized for environments demanding low latency, high throughput, and secure communication.

Overview of Specifications

  • Manufacturer: Nvidia
  • Part Number: 900-9X6AG-0086-ST0
  • Type: Dual-Port 100GbE Adapter
  • Interface: PCI Express 4.0 x16
  • Bracket: Full Height

Physical Specifications

Form Factor and Build

  • Dimensions: 5.59 in × 2.71 in (142.00 mm × 68.90 mm)
  • Connectivity: Dual QSFP56 (supports copper and optical cables)
  • Bracket: Full height

Data Rate Options

  • Ethernet rates: 1/10/25/40/50/100 Gb/s (per the supported standards listed below)
  • PCI Express: Gen 3.0/4.0, x16 lanes, up to 16.0 GT/s per lane

Ethernet Compatibility

This adapter interoperates with a wide range of Ethernet standards, ensuring reliability in diverse networking environments:
  • 100GBASE-CR2, 100GBASE-CR4, 100GBASE-KR4, 100GBASE-SR4
  • 50GBASE-R2, 50GBASE-R4
  • 40GBASE-CR4, 40GBASE-KR4, 40GBASE-SR4, 40GBASE-LR4, 40GBASE-ER4, 40GBASE-R2
  • 25GBASE-R, 20GBASE-KR2
  • 10GBASE-LR, 10GBASE-ER, 10GBASE-CX4, 10GBASE-CR, 10GBASE-KR, 10GBASE-SR
  • SGMII, 1000BASE-CX, 1000BASE-KX

Performance Capabilities

To protect data integrity and system reliability, the Nvidia ConnectX-6 Dx incorporates cutting-edge encryption acceleration and secure boot technology, providing an additional layer of trust in sensitive enterprise environments:
  • Crypto-enabled processing
  • Secure Boot supported

Typical and Maximum Power

Energy efficiency is a key highlight of this adapter, with optimized power usage across both PCIe Gen 3.0 and Gen 4.0 environments:
  • Typical Power (Passive Cables): 18.7W (Gen 3.0) / 19.52W (Gen 4.0)
  • Maximum Power (Passive Cables): 25.28W (Gen 3.0) / 26.64W (Gen 4.0)
  • Power via QSFP56 ports: up to 5W per port
  • Voltage: 3.3V AUX
  • Max Current: 100mA

Cooling and Airflow

  • Passive Cable Cooling: 550 LFM (hot aisle – heatsink to port)
  • Active 2.5W Cable Cooling: 700 LFM (hot aisle – heatsink to port)

Environmental Conditions

Temperature Range

  • Operational: 0°C to 55°C
  • Storage: -40°C to 70°C

Humidity Tolerance

  • Operational: 10% – 85% relative humidity
  • Non-operational: 10% – 90% relative humidity

Altitude Endurance

  • Designed for performance in various conditions, the adapter can operate effectively at altitudes up to 3050 meters.

Regulatory Compliance

  • The Nvidia 900-9X6AG-0086-ST0 meets RoHS standards, ensuring environmentally friendly production and operation.

Key Highlights at a Glance

  • Dual-port QSFP56 with copper and optical
  • Speeds up to 100GbE with multiple backward-compatible standards
  • Advanced cryptographic and secure boot capabilities
  • Optimized power efficiency across PCIe Gen 3.0 and 4.0
  • Comprehensive environmental resilience (temperature, humidity, altitude)
  • RoHS compliant for sustainable technology deployment

Category Overview: Nvidia 900-9X6AG-0086-ST0

The Nvidia ConnectX-6 Dx family, to which the 900-9X6AG-0086-ST0 belongs, is a high-performance class of data center network adapters engineered for demanding enterprise, cloud, and HPC environments. These PCIe 4.0 x16 network interface cards (NICs) with QSFP56 connectivity deliver dual-port 100GbE bandwidth, ultra-low latency, and advanced offloads for CPU efficiency. This category encompasses adapters designed to accelerate networking, storage, virtualization, and security workloads while providing the resilience, programmability, and telemetry that modern infrastructures require.

Technical Specifications and Architecture

PCIe interface and card form factor

The ConnectX-6 Dx series supports PCI Express 4.0 x16, enabling sustained line-rate throughput across both ports when installed in compatible servers. The x16 mechanical and electrical interface is essential for minimizing host bottlenecks when running multiple high-throughput data streams, NVMe-over-Fabrics, or virtualized network stacks. Form factors vary by vendor but typically conform to full-height, half-length add-in cards suitable for mainstream and dense server chassis.
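As a back-of-envelope illustration of why the Gen 4.0 link matters, the C sketch below compares usable x16 bandwidth per direction against the 200 Gb/s needed to drive both ports at line rate. It uses standard PCIe encoding parameters rather than vendor-published figures for this card, and ignores protocol overheads.

```c
#include <stdio.h>

/* Rough check: can a PCIe x16 link feed both 100GbE ports at line rate?
 * Generic PCIe parameters (8 GT/s Gen3, 16 GT/s Gen4, 128b/130b encoding);
 * TLP/flow-control overheads are ignored for simplicity. */
int main(void) {
    const double lanes    = 16.0;
    const double encoding = 128.0 / 130.0;      /* 128b/130b (Gen3/Gen4) */
    double gen3 = 8.0  * lanes * encoding;      /* Gb/s per direction */
    double gen4 = 16.0 * lanes * encoding;
    double need = 2.0 * 100.0;                  /* dual-port 100GbE */

    printf("PCIe 3.0 x16: %6.1f Gb/s usable (need %.0f)\n", gen3, need);
    printf("PCIe 4.0 x16: %6.1f Gb/s usable (need %.0f)\n", gen4, need);
    return 0;
}
```

PCIe 3.0 x16 tops out around 126 Gb/s, below the 200 Gb/s aggregate of two saturated ports, which is why a Gen 3.0 slot cannot sustain both ports at full rate simultaneously.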

Programmability and Smart NIC features

The architecture permits flexible programmability through P4, eBPF, or vendor SDKs, enabling use as a SmartNIC for in-network compute tasks like NAT, firewalling, telemetry sampling, or custom packet manipulation. This makes the category attractive for organizations pursuing disaggregated cloud-native architectures and edge deployments where offloading complex operations from the host CPU is desirable.
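As a small taste of the eBPF side of that programmability, here is a minimal XDP filter in C that drops UDP traffic to an arbitrary example port (9999). This is a generic kernel-path sketch, not vendor SDK code: XDP runs in the host driver, while true on-NIC offload goes through the vendor's P4/SDK toolchain.

```c
/* Minimal XDP sketch: drop UDP packets destined to example port 9999.
 * Build:  clang -O2 -g -target bpf -c xdp_drop.c -o xdp_drop.o
 * Attach: ip link set dev <iface> xdp obj xdp_drop.o sec xdp
 */
#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

SEC("xdp")
int drop_udp_9999(struct xdp_md *ctx)
{
    void *data = (void *)(long)ctx->data;
    void *end  = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > end || eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > end || ip->protocol != IPPROTO_UDP)
        return XDP_PASS;

    /* Assumes no IP options, for brevity. */
    struct udphdr *udp = (void *)(ip + 1);
    if ((void *)(udp + 1) > end)
        return XDP_PASS;

    return udp->dest == bpf_htons(9999) ? XDP_DROP : XDP_PASS;
}

char _license[] SEC("license") = "GPL";
```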

Memory, buffers and QoS

Large on-chip buffering and advanced Quality of Service (QoS) controls support jitter-sensitive applications, multi-tenant isolation, and traffic engineering. Administrators can configure multiple hardware queues, rate-limiting, and priority mapping to guarantee SLAs for latency-critical flows such as remote direct memory access (RDMA) or financial trading platforms.
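On the host side, one common way to steer a latency-critical flow into a priority class is the Linux SO_PRIORITY socket option; the mapping from that priority to a hardware queue or traffic class is configured separately (for example with tc's mqprio qdisc). A minimal sketch, using an arbitrary example priority value:

```c
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

/* Sketch: tag a socket's egress traffic with a priority that the kernel
 * can map to a hardware queue / traffic class.  The priority value and
 * its queue mapping are deployment-specific. */
int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int prio = 5;  /* example priority; site-specific */
    if (setsockopt(fd, SOL_SOCKET, SO_PRIORITY, &prio, sizeof(prio)) < 0)
        perror("setsockopt(SO_PRIORITY)");
    else
        printf("socket priority set to %d\n", prio);

    close(fd);
    return 0;
}
```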

Scalability and multi-host deployments

The adapter's hardware and software stack facilitate scaling across hyperconverged systems, fabrics, and storage clusters. Features like SR-IOV, NVGRE, and VXLAN offloads reduce per-VM networking cost and help maintain performance as virtualized deployments grow; the sketch below shows the standard Linux path to enabling virtual functions.
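A minimal C sketch of creating SR-IOV virtual functions via the standard Linux sysfs knob. The interface name and VF count are placeholders, root privileges are required, and SR-IOV must be enabled in the NIC firmware.

```c
#include <stdio.h>

/* Sketch: request 4 SR-IOV virtual functions on a placeholder interface
 * ("eth0") by writing to the standard sysfs attribute.  Requires root. */
int main(void) {
    const char *path = "/sys/class/net/eth0/device/sriov_numvfs";
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return 1; }
    fprintf(f, "4\n");   /* number of VFs to create */
    fclose(f);
    puts("requested 4 VFs; verify with: ip link show eth0");
    return 0;
}
```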

Management tools and orchestration

Administrators can leverage vendor management utilities, RESTful APIs, SNMP, and telemetry frameworks for monitoring and updates. Integration with orchestration platforms such as Kubernetes, OpenStack, and VMware is common, particularly when combined with CNI plugins that offload networking functions to the NIC.

Firmware, security updates and lifecycle

Firmware continuity is critical for security and performance. The ConnectX-6 Dx family typically supports secure firmware update mechanisms and rollback capabilities. Vendors publish guidance for staged upgrades and compatibility matrices to avoid downtime in production clusters.

Use Cases and Workloads

High-performance computing (HPC)

HPC clusters benefit from the low-latency, high-throughput characteristics of ConnectX-6 Dx adapters. Use cases include MPI-based scientific simulations, parallel databases, and GPU-accelerated workloads where the NIC helps reduce communication overhead between compute nodes and GPUs.

Cloud and hyperscale data centers

In cloud environments, these adapters underpin virtual networking, tenant isolation, and bursty multi-tenant traffic patterns. Built-in acceleration for virtualization and SR-IOV helps improve VM density and reduce CPU consumption for networking tasks.

Storage and NVMe-over-Fabrics (NVMe-oF)

The combination of RDMA and NVMe-oF offloads reduces latency for distributed storage arrays and remote block storage. The ConnectX-6 Dx line is commonly deployed in storage nodes and as front-end adapters for software-defined storage systems.
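As a minimal starting point, the sketch below uses libibverbs (link with -libverbs) to enumerate RDMA-capable devices; ConnectX adapters typically appear as mlx5_* devices. A real RDMA or NVMe-oF data path would allocate protection domains, queue pairs, and registered memory on top of this.

```c
#include <stdio.h>
#include <infiniband/verbs.h>

/* Sketch: list RDMA-capable devices visible to libibverbs. */
int main(void) {
    int num = 0;
    struct ibv_device **list = ibv_get_device_list(&num);
    if (!list) { perror("ibv_get_device_list"); return 1; }

    for (int i = 0; i < num; i++)
        printf("RDMA device %d: %s\n", i, ibv_get_device_name(list[i]));

    ibv_free_device_list(list);
    return 0;
}
```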

Financial services and real-time analytics

Low-jitter, deterministic performance is crucial in trading platforms and real-time analytics. Hardware timestamping, packet prioritization, and predictable latency profiles make this category attractive for latency-sensitive finance workloads.
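On Linux, hardware timestamps are requested per socket with SO_TIMESTAMPING, as in the sketch below. The NIC driver must separately have hardware timestamping enabled, and the timestamps themselves arrive as SCM_TIMESTAMPING control messages on recvmsg().

```c
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/net_tstamp.h>
#include <unistd.h>

/* Sketch: request raw hardware RX timestamps on a UDP socket. */
int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int flags = SOF_TIMESTAMPING_RX_HARDWARE |
                SOF_TIMESTAMPING_RAW_HARDWARE;
    if (setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING, &flags, sizeof(flags)) < 0)
        perror("setsockopt(SO_TIMESTAMPING)");
    else
        puts("hardware RX timestamping requested");

    close(fd);
    return 0;
}
```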

Comparisons, Variants, and Related Models

ConnectX-6 Dx vs. ConnectX-5 and other generations

Compared to previous generations, ConnectX-6 Dx improves PCIe bandwidth utilization (PCIe 4.0), packet processing capacity, and programmability. These generational differences manifest in more advanced offloads, higher port speeds (QSFP56 support), and better integration with modern software stacks.

Dual-port vs. single-port trade-offs

Dual-port adapters like the 900-9X6AG-0086-ST0 provide redundancy, link aggregation options, and higher aggregate throughput for multi-tenant or multi-path topologies. Single-port variants may be preferable for cost-sensitive or space-constrained deployments where only one 100GbE link is required.

Compatibility and vendor-specific SKUs

Many OEMs rebrand or modify base ConnectX-6 Dx designs for server compatibility, adding vendor-specific heatsinks, firmware, or validation testing. Verify the SKU (e.g., 900-9X6AG-0086-ST0) against server vendor compatibility lists and firmware release notes to avoid interoperability issues.

Secure boot and firmware integrity

Maintain firmware hygiene by deploying signed firmware, tracking CVEs, and applying patches during scheduled maintenance windows. Establish rollback procedures in case of firmware-related regressions.

Network segmentation and tenant isolation

Use VLAN tagging, VXLAN, NVGRE, and SR-IOV to separate tenant traffic and enforce compliance boundaries. The hardware's support for multiple traffic classes and QoS policies simplifies multi-tenant enforcement at line speeds.

Telemetry, logging and observability

Modern ConnectX-6 Dx adapters expose rich telemetry, including port-level counters, per-queue statistics, and hardware timestamps. Collect these metrics using Prometheus exporters, SNMP, or vendor-provided agents to identify congestion, packet drops, or performance anomalies early.
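As a trivial illustration of the collection side, this sketch reads two standard per-interface counters from sysfs ("eth0" is a placeholder); an exporter would poll these, or the richer ethtool/vendor counters, on an interval.

```c
#include <stdio.h>

/* Sketch: sample two standard Linux per-interface counters from sysfs. */
static long long read_counter(const char *path) {
    long long v = -1;
    FILE *f = fopen(path, "r");
    if (f && fscanf(f, "%lld", &v) != 1) v = -1;
    if (f) fclose(f);
    return v;
}

int main(void) {
    printf("rx_bytes:   %lld\n",
           read_counter("/sys/class/net/eth0/statistics/rx_bytes"));
    printf("rx_dropped: %lld\n",
           read_counter("/sys/class/net/eth0/statistics/rx_dropped"));
    return 0;
}
```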

End-of-life and upgrade paths

Monitor vendor lifecycle announcements and plan hardware refresh cycles. When planning upgrades, assess whether newer network interface generations (e.g., PCIe 5.0-capable NICs) justify migration based on expected workload growth and feature benefits.

Related Accessories and Complementary Products

Server and chassis considerations

Verify that server backplanes provide adequate PCIe lane mapping and that chassis airflow supports the thermal envelope of a high-performance NIC. In dense rack deployments, evaluate blower-style cooling or front-to-back airflow patterns endorsed by the server manufacturer.

Switches and fabric hardware

Ensure top-of-rack and spine switches support 100GbE QSFP56 ports and the intended features (RoCEv2, PFC, ECN) if deploying RDMA or storage fabrics. Compatibility matrices between NICs and switch firmware can reduce troubleshooting time.

Reliability, MTBF and expected lifetime

Enterprise-grade NICs are rated for extended duty cycles; check vendor MTBF figures and recommended operational lifetimes. Regular firmware updates and thermal monitoring extend useful life and reduce risk of field failures.

Regulatory, Compliance and Certification

Safety and emission standards

Network adapters for data centers commonly comply with CE, FCC, RoHS, and other regional safety and environmental directives. Confirm certifications relevant to your deployment geography.

Interoperability and tested topologies

Many vendors publish validated architectures that pair NICs with particular switches, storage arrays, and orchestration layers. Use these validated topologies where possible to reduce integration effort and increase predictability.

Extended Technical Deep-Dive (for engineers and architects)

Advanced offload scenarios

In clustered AI training or distributed databases, combining NVMe-oF with RDMA offloads can dramatically reduce host CPU overhead and improve effective I/O throughput. Use cases include parameter server architectures, distributed checkpointing, and low-latency database replication.

Final technical note for catalog editors

When authoring individual product pages under this category, maintain specification parity and update the SKU details (900-9X6AG-0086-ST0) in title tags, canonical URLs, and product metadata. Ensure that the card’s firmware version, validated OS list, and recommended transceivers are visible in the product detail panel to reduce pre-sales support load and returns.

Features

  • Manufacturer Warranty: 3 Years Warranty from Original Brand
  • Product/Item Condition: Factory-Sealed New in Original Box (FSB)
  • ServerOrbit Replacement Warranty: 1 Year Warranty