
MSB7700-ES2F Mellanox InfiniBand 36x 100Gb QSFP28 EDR B-F Air Switch


Brief Overview of MSB7700-ES2F

Mellanox MSB7700-ES2F InfiniBand 36x 100Gb QSFP28 EDR B-F Air Switch. Excellent Refurbished, with a 1-year replacement warranty.

List Price: $1,217.70
Your Price: $902.00
You save: $315.70 (26%)
SKU/MPN: MSB7700-ES2F
Availability: ✅ In Stock
Processing Time: Usually ships same day
Manufacturer: MELLANOX
Manufacturer Warranty: None
Product/Item Condition: Excellent Refurbished
ServerOrbit Replacement Warranty: 1 Year Warranty
Our Advantages
Payment Options
  • Visa, MasterCard, Discover, and Amex
  • JCB, Diners Club, UnionPay
  • PayPal, ACH/Bank Transfer (11% off)
  • Apple Pay, Amazon Pay, Google Pay
  • Buy Now, Pay Later: Affirm, Afterpay
  • GOV/EDU/Institution POs accepted
  • Invoices
Delivery
  • Delivery anywhere
  • Express delivery in the USA and worldwide
  • Ships to APO/FPO addresses
  • USA: free ground shipping
  • Worldwide: from $30
Description

Mellanox 36x 100Gb QSFP28 B-F Air Switch

The Mellanox MSB7700-ES2F is a high-performance InfiniBand switch designed for data-intensive environments. With 36x 100Gb/s QSFP28 EDR ports, it delivers exceptional throughput and reliability for enterprise and HPC networks.

General Information

  • Brand: Mellanox
  • Model Number: MSB7700-ES2F
  • Device Type: Network Switch

Technical Specifications

Network Connectivity

  • 36x 100Gb/s InfiniBand EDR QSFP28 interfaces
  • InfiniBand Extended Data Rate (EDR) with 64b/66b encoding
  • Per-lane data rate of 25Gb/s (25.78125Gb/s signaling; see the arithmetic sketch after this list)
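
The arithmetic behind these figures is worth making explicit. The following minimal sketch (plain C, an illustration rather than vendor material) shows how the per-lane signaling rate, 64b/66b encoding, and the four lanes of a QSFP28 port combine into 100Gb/s per port and roughly 7Tb/s of full-duplex switching capacity across 36 ports:

```c
#include <stdio.h>

int main(void) {
    const double signaling_gbd = 25.78125;    /* EDR per-lane signaling rate */
    const double encoding      = 64.0 / 66.0; /* 64b/66b line-code efficiency */
    const int    lanes         = 4;           /* lanes per QSFP28 port */
    const int    ports         = 36;          /* ports on the MSB7700-ES2F */

    double lane_gbps   = signaling_gbd * encoding;         /* 25.0 Gb/s per lane */
    double port_gbps   = lane_gbps * lanes;                /* 100 Gb/s per port */
    double fabric_tbps = port_gbps * ports * 2 / 1000.0;   /* full duplex: 7.2 Tb/s */

    printf("per-lane data rate: %.2f Gb/s\n", lane_gbps);
    printf("per-port data rate: %.2f Gb/s\n", port_gbps);
    printf("switching capacity: %.1f Tb/s\n", fabric_tbps);
    return 0;
}
```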

Management Interfaces

  • 2x 1 Gigabit RJ45 management ports
  • 1x RS232 console port
  • 1x USB interface

Performance Metrics

  • Switching capacity: 7.2Tb/s
  • Ultra-low latency: 90ns
  • Powered by a dual-core x86 CPU

Cooling and Power

Airflow Design

  • Back-to-Front airflow (P2C) for optimized cooling

Fan Modules

  • 4x MTEF-FANF-A hot-swappable fan units included

Power Supply Units

  • 2x MTEF-PSF-AC-A AC power supplies included

MSB7700-ES2F InfiniBand 36x 100Gb QSFP28 Switch Overview

The Mellanox MSB7700-ES2F InfiniBand 36x 100Gb QSFP28 EDR B-F Air Switch is a high-performance data center interconnect platform engineered for modern high-performance computing, artificial intelligence workloads, cloud-scale infrastructure, and enterprise storage environments. Designed by Mellanox Technologies, now part of NVIDIA, this InfiniBand EDR switch represents a critical component in building ultra-low-latency, high-bandwidth fabrics that meet the demands of today’s compute-intensive applications.

The MSB7700-ES2F model delivers 36 ports of 100Gb/s EDR InfiniBand connectivity via QSFP28 interfaces, offering exceptional throughput, deterministic performance, and hardware-accelerated networking features. Its B-F (back-to-front) airflow design ensures efficient cooling in data center rack deployments, aligning with hot aisle/cold aisle containment strategies commonly used in hyperscale and enterprise facilities.

InfiniBand EDR Architecture and Performance Capabilities

100Gb/s EDR Throughput for High-Performance Fabrics

The Mellanox MSB7700-ES2F switch supports EDR (Enhanced Data Rate) InfiniBand at 100Gb/s per port, delivering aggregate switching capacity that scales to meet the needs of dense HPC clusters and AI training environments. With 36 QSFP28 ports, this platform enables flexible topologies such as fat-tree, leaf-spine, torus, and dragonfly architectures, making it suitable for both small research clusters and large-scale production data centers.
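
To put the 36-port radix in perspective, the standard fat-tree formulas give the maximum number of end nodes a non-blocking fabric of such switches can reach: k²/2 hosts with two switch levels and k³/4 with three. The short sketch below (C, illustrative only) evaluates both for k = 36:

```c
#include <stdio.h>

/* Maximum end nodes of a non-blocking fat-tree built from radix-k switches:
   two levels support k*k/2 hosts, three levels support k*k*k/4 hosts. */
int main(void) {
    const int k = 36;  /* MSB7700-ES2F port count */
    printf("two-level fat-tree:   %d hosts\n", k * k / 2);      /* 648 */
    printf("three-level fat-tree: %d hosts\n", k * k * k / 4);  /* 11664 */
    return 0;
}
```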

EDR InfiniBand technology significantly reduces network bottlenecks by providing high message rates, ultra-low latency, and advanced congestion control mechanisms. These attributes are particularly important for distributed computing frameworks and parallel file systems that depend on fast node-to-node communication.

Low Latency and High Message Rate Processing

The MSB7700-ES2F is optimized for microsecond-level latency, ensuring efficient synchronization across compute nodes. In HPC simulations, financial modeling, genomics research, and AI training tasks, minimizing network latency directly impacts application performance and time-to-results.

InfiniBand’s RDMA (Remote Direct Memory Access) capabilities allow direct memory-to-memory data transfers between servers without CPU intervention, significantly reducing overhead. The switch hardware supports advanced routing and congestion management, helping to maintain consistent performance even under heavy load conditions.
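
From a host attached to the fabric, link state and speed can be checked programmatically through the standard libibverbs API. The sketch below (C, linked with -libverbs) queries port 1 of the first RDMA device it finds; treat it as a minimal illustration under the assumption of a single-port EDR adapter, not production code:

```c
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void) {
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) {
        fprintf(stderr, "cannot open %s\n", ibv_get_device_name(devs[0]));
        return 1;
    }

    struct ibv_port_attr port;
    if (ibv_query_port(ctx, 1, &port) == 0) {
        /* active_speed 32 corresponds to EDR (~25Gb/s per lane),
           active_width 2 corresponds to 4x lanes, i.e. a 100Gb/s link */
        printf("%s port 1: state=%s speed_code=%u width_code=%u\n",
               ibv_get_device_name(devs[0]),
               port.state == IBV_PORT_ACTIVE ? "ACTIVE" : "not active",
               port.active_speed, port.active_width);
    }

    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```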

Switching Capacity and Internal Fabric Design

Internally, the Mellanox MSB7700-ES2F switch incorporates a high-performance switching ASIC capable of handling full line-rate traffic across all ports simultaneously. This ensures non-blocking architecture and predictable throughput across large fabrics.

Each QSFP28 port supports EDR signaling, allowing for connectivity to InfiniBand adapters, storage targets, GPU-accelerated servers, and gateway appliances. The robust internal architecture supports efficient packet processing and optimized flow control, maintaining fabric stability and reliability across complex deployments.

36x QSFP28 Port Configuration and Connectivity Options

QSFP28 Interface Flexibility

The 36 QSFP28 ports on the Mellanox MSB7700-ES2F provide flexible connectivity options for both short-range and long-range interconnect scenarios. QSFP28 transceivers support direct attach copper (DAC) cables for short rack-level connections as well as active optical cables (AOC) and fiber modules for extended distances within data center rows or across facilities.

This flexibility allows data center architects to optimize costs while maintaining performance. DAC cables are ideal for top-of-rack to server connections, while optical modules are well suited for spine-layer interconnects or cross-room deployments.

Scalable Cluster Design

With 36 ports available, the MSB7700-ES2F can function as a leaf switch connecting directly to compute nodes or as part of a spine layer in a larger InfiniBand fabric. When deployed in multi-switch topologies, it supports large-scale clusters with thousands of nodes, maintaining consistent latency and bandwidth characteristics.

The switch enables linear scaling of compute resources, making it suitable for AI model training clusters, big data analytics environments, and scientific computing infrastructures where horizontal scalability is essential.

B-F Airflow Design for Data Center Efficiency

Back-to-Front Cooling Optimization

The B-F (back-to-front) airflow configuration of the Mellanox MSB7700-ES2F is engineered to align with modern data center cooling strategies. In this configuration, cool air is drawn in on the power-supply side and exhausted over the port side, so the switch can be racked with its ports facing the hot aisle while still breathing cold-aisle air. Matching the unit's airflow direction to the rack's hot aisle/cold aisle layout improves thermal efficiency and reduces cooling costs.

Efficient cooling is crucial in high-density HPC and AI environments, where multiple servers and switches generate significant heat loads. Proper airflow design helps maintain optimal operating temperatures and extends hardware lifespan.

Redundant Power and Fan Modules

The MSB7700-ES2F is designed with redundant, hot-swappable power supplies and fan modules to ensure high availability. This redundancy minimizes downtime during maintenance or component replacement and supports mission-critical operations in enterprise and research environments.

Hot-swappable components allow technicians to perform maintenance without powering down the switch, maintaining continuous network availability and preserving active compute jobs.

Advanced InfiniBand Features and Capabilities

RDMA Acceleration

RDMA is a foundational feature of InfiniBand networks, and the Mellanox MSB7700-ES2F fully supports hardware-based RDMA acceleration. By enabling direct data transfers between server memory spaces without CPU intervention, RDMA reduces latency and improves overall application throughput.

This capability is especially important for distributed machine learning frameworks, parallel databases, and real-time analytics workloads that rely on high-speed inter-node communication.
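
On the host side, the prerequisite for any RDMA transfer is registering memory with the adapter so the HCA can move data into pinned buffers without CPU involvement. The fragment below (libibverbs, C) sketches just that registration step under simplifying assumptions; queue-pair setup and the out-of-band exchange of the rkey with the peer are deliberately omitted:

```c
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

/* Pin a buffer and expose it for remote read/write. Error handling is
   abbreviated; a real application would also create queue pairs and
   exchange mr->rkey with its peer before issuing RDMA operations. */
static struct ibv_mr *register_rdma_buffer(struct ibv_context *ctx, size_t len)
{
    struct ibv_pd *pd = ibv_alloc_pd(ctx);  /* protection domain */
    void *buf = pd ? malloc(len) : NULL;
    if (!buf)
        return NULL;

    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (mr)
        printf("registered %zu bytes, rkey=0x%x\n", len, mr->rkey);
    return mr;
}

int main(void) {
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0)
        return 1;
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    return (ctx && register_rdma_buffer(ctx, 1 << 20)) ? 0 : 1;
}
```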

Adaptive Routing and Congestion Control

The switch includes advanced adaptive routing mechanisms that dynamically select optimal paths through the fabric, preventing congestion hotspots and ensuring consistent performance. Congestion control algorithms detect traffic buildup and redistribute flows efficiently, maintaining high throughput even during peak utilization.

These features are essential in AI clusters where traffic patterns can shift rapidly during model training cycles or data shuffling operations.

Quality of Service and Traffic Isolation

The Mellanox MSB7700-ES2F supports multiple virtual lanes and quality of service mechanisms, allowing administrators to prioritize critical traffic types. This ensures that latency-sensitive workloads such as real-time simulations or transactional analytics maintain consistent performance even when sharing the network with bulk data transfers.

Traffic isolation capabilities enhance security and operational stability, enabling multi-tenant HPC or cloud environments to operate efficiently on a shared fabric.

High-Performance Computing Applications

Scientific Research and Simulation

In research laboratories and academic institutions, HPC clusters powered by InfiniBand networks are used for complex simulations in physics, chemistry, and climate science. The Mellanox MSB7700-ES2F provides the low-latency interconnect required for tightly coupled applications such as MPI-based workloads.

By enabling rapid node-to-node communication, the switch reduces synchronization delays and improves parallel efficiency across large compute grids.
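
Synchronization delay of this kind is usually quantified with a ping-pong microbenchmark between two nodes attached to the switch. The minimal MPI sketch below (C; run with something like `mpirun -np 2 ./pingpong` across two hosts) estimates average one-way small-message latency and is meant as an illustration, not a tuned benchmark:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char buf[8] = {0};      /* small message to expose latency */
    const int iters = 10000;

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {    /* send, then wait for the echo */
            MPI_Send(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {  /* echo every message back */
            MPI_Recv(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof buf, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)          /* each iteration is two one-way trips */
        printf("avg one-way latency: %.2f us\n",
               (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}
```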

Artificial Intelligence and Deep Learning

AI training environments require massive data movement between GPU-accelerated servers. The high bandwidth and low latency of the MSB7700-ES2F make it an ideal interconnect for distributed deep learning clusters. When paired with GPU servers and InfiniBand adapters, the switch facilitates fast gradient synchronization and efficient scaling across multiple nodes.

This performance advantage accelerates model training cycles and reduces overall infrastructure costs by improving resource utilization.
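
At its core, the gradient synchronization step reduces to an allreduce over the gradient vector; production frameworks delegate it to MPI or NCCL collectives that exploit InfiniBand RDMA. The toy sketch below (C with MPI, illustrative values only) averages a stand-in gradient across all ranks:

```c
#include <mpi.h>
#include <stdio.h>

#define NPARAMS 4  /* toy gradient; real models carry millions of parameters */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    float grad[NPARAMS];
    for (int i = 0; i < NPARAMS; i++)
        grad[i] = (float)rank;  /* stand-in for locally computed gradients */

    /* Sum gradients from every rank in place, then divide to average.
       Over an InfiniBand fabric the MPI library typically performs this
       collective with RDMA, keeping host CPUs out of the data path. */
    MPI_Allreduce(MPI_IN_PLACE, grad, NPARAMS, MPI_FLOAT, MPI_SUM,
                  MPI_COMM_WORLD);
    for (int i = 0; i < NPARAMS; i++)
        grad[i] /= (float)size;

    if (rank == 0)
        printf("averaged grad[0] = %.3f\n", grad[0]);

    MPI_Finalize();
    return 0;
}
```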

Enterprise Storage and Database Acceleration

InfiniBand fabrics are widely used to interconnect high-performance storage arrays and database clusters. The Mellanox MSB7700-ES2F supports fast data replication, backup operations, and high-throughput transactional processing. RDMA capabilities reduce storage latency and enable efficient access to shared storage resources.

For enterprises managing large data sets, this switch enhances application responsiveness and improves overall data center performance.

Integration with InfiniBand Adapters

The switch integrates seamlessly with Mellanox ConnectX InfiniBand host channel adapters, forming a cohesive fabric solution. When deployed with compatible adapters, administrators can leverage unified management tools and advanced telemetry features for monitoring and troubleshooting.

This integration simplifies deployment and ensures consistent firmware and feature compatibility across the network stack.

Features
Manufacturer Warranty: None
Product/Item Condition: Excellent Refurbished
ServerOrbit Replacement Warranty: 1 Year Warranty