
900-2H400-0300-031 Nvidia Tesla P100 16GB PCI-E GPU Accelerator

* Product may have slight variations vs. image

Brief Overview of 900-2H400-0300-031

Nvidia 900-2H400-0300-031 Tesla P100 16GB HBM2 PCI-E GPU Accelerator in Excellent Refurbished condition with a 1-year replacement warranty. HPE version.

$1,073.25
$795.00
You save: $278.25 (26%)
  • SKU/MPN: 900-2H400-0300-031
  • Availability: In Stock
  • Processing Time: Usually ships same day
  • Manufacturer: Nvidia
  • Manufacturer Warranty: None
  • Product/Item Condition: Excellent Refurbished
  • ServerOrbit Replacement Warranty: 1 Year Warranty
Our Advantages
Payment Options
  • Visa, MasterCard, Discover, and Amex
  • JCB, Diners Club, UnionPay
  • PayPal, ACH/Bank Transfer (11% Off)
  • Apple Pay, Amazon Pay, Google Pay
  • Buy Now, Pay Later: Affirm, Afterpay
  • GOV/EDU/Institution POs Accepted
  • Invoices
Delivery
  • Delivers Anywhere
  • Express Delivery in the USA and Worldwide
  • Ships to APO/FPO Addresses
  • USA: Free Ground Shipping
  • Worldwide: from $30
Description

Overview of the NVIDIA Tesla P100 16GB HBM2 GPU Accelerator

The Nvidia Tesla P100 16GB PCIe GPU Accelerator (Part Number: 900-2H400-0300-031) is a high-performance compute accelerator designed for advanced computing, AI training, and scientific workloads. Built on NVIDIA's Pascal architecture, it delivers exceptional speed, energy efficiency, and reliability for enterprise-grade applications. With 16GB of integrated HBM2 memory and a PCI Express interface, the card provides high throughput and straightforward scalability for data centers and research environments.

General Information

  • Brand: Nvidia
  • Part Number: 900-2H400-0300-031
  • Product Type: 16GB PCIe GPU Accelerator
  • Architecture: Pascal-based design for high computational density

Technical Specifications

  • Product Model: Nvidia 900-2H400-0300-031 Tesla P100
  • Memory Capacity: 16GB HBM2
  • Interface: PCI Express 3.0 x16
  • Max Power Consumption: ~250W board power (typical for PCIe P100 variants; confirm the configured TDP on the OEM listing)
  • Dimensions: 267 mm (10.5 in) length, 112 mm height, dual-slot width
  • Power Input: 12V DC via the PCIe slot plus auxiliary power connector(s)
  • Cooling Type: Passive heatsink (requires server chassis airflow)

Key Advantages

  • High-Speed Analytics: Designed for complex analytics, simulations, and real-time computation with unmatched precision.
  • Plug-and-Play PCIe Interface: Installs in a standard PCIe 3.0 x16 slot for straightforward integration and upgrades.
  • Ultra-Fast HBM2 Memory: Equipped with 16GB of next-generation High Bandwidth Memory (HBM2) for handling massive datasets and intensive workloads.
  • Exceptional Rendering Power: Accelerates rendering tasks involving textures, lighting, shadow mapping, and graphical modeling.
  • Data Center Ready: Built to meet enterprise-level standards for reliability, longevity, and consistent output under continuous load.

Memory Specifications

  • Memory Capacity: 16GB (standard)
  • Memory Type: HBM2 (High Bandwidth Memory 2)
  • Data Bandwidth: Up to roughly 732 GB/s over a 4096-bit interface, far beyond traditional GDDR memory types
  • Optimized for: AI workloads, complex simulations, deep learning frameworks, and enterprise visualization

Display and Virtualization

  • Display Outputs: None; the Tesla P100 is a headless compute accelerator with no video connectors
  • Graphics Technology: Advanced CUDA and GPU virtualization support

Expansion and Connectivity

  • Interface Type: PCI Express 3.0 x16
  • Form Factor: Dual-slot design suitable for standard PCIe configurations
  • Length: 267 mm for broad chassis compatibility
  • Scalability: Supports multi-GPU configurations for parallel workloads

Power and Voltage Details

  • Power Delivery: 12V DC via the PCIe slot and auxiliary power connector(s); no external AC supply is used
  • Power Consumption: Approximately 250W maximum board power (verify the configured TDP with the OEM listing)
  • Idle Power: Substantially reduced at idle through NVIDIA power management

Common Applications

  • Artificial Intelligence (AI) and Machine Learning: Accelerates training and inference models with high-speed data processing.
  • Scientific Research: Supports simulation, analysis, and numerical modeling in research labs and universities.
  • Big Data Analytics: Handles massive datasets efficiently, enabling advanced insights in real time.
  • Computer-Aided Engineering (CAE): Provides faster simulation results for complex engineering projects.
  • Rendering and Visualization: Speeds up 3D rendering, texture computation, and visualization workflows.

System Compatibility

  • Compatible with Linux and Windows-based systems.
  • Optimized for multi-GPU setups in cluster environments.
  • Works seamlessly with CUDA and deep learning frameworks.

Integration Advantages

  • Easy installation and maintenance in enterprise systems.
  • Flexible scalability for growing computational demands.
  • Stable integration with advanced software stacks for AI and HPC workloads.

Category Overview of NVIDIA 900-2H400-0300-031

The NVIDIA 900-2H400-0300-031 listing commonly refers to the Tesla P100 PCI-E GPU Accelerator configured with 16 GB of HBM2 memory. This family of accelerators, based on the Pascal GP100 GPU, was designed for high-performance computing (HPC), scientific simulation, deep learning training and inference, and data-center-scale analytics. The PCIe form factor of the 900-2H400-0300-031 makes it suitable for a wide range of standard server and workstation slots, providing an accessible way to add GPU compute to systems that don't support the SXM2 form factor.

Core Features of NVIDIA Tesla P100

  • Accelerated Computing Power: Designed for AI, deep learning, and high-performance computing (HPC) workloads, enabling faster data processing and reduced time-to-insight.
  • High-Bandwidth HBM2 Memory: Integrated 16GB HBM2 memory ensures fast data access, supports heavy workloads, and optimizes performance for complex calculations.
  • PCI Express Interface: The PCIe 3.0 x16 interface allows straightforward installation in standard server slots and flexible hardware upgrades.
  • Enhanced Reliability: Built for round-the-clock operation, offering consistent and reliable performance in intensive data center environments.

Detailed technical specifications

Compute and core configuration

The Tesla P100 GPU in the PCIe 16 GB configuration integrates 3,584 CUDA cores. These cores implement the Pascal architecture’s compute pipelines and support the CUDA programming model, enabling native use of NVIDIA’s CUDA libraries, cuDNN, cuBLAS, and other accelerator toolchains for scientific and machine-learning workloads. This core count positions the P100 as a strong single-GPU compute engine for double-precision and single-precision workloads in its generation. 
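
For buyers who want to verify the configuration after installation, a short Python sketch (assuming a CUDA-enabled PyTorch build) can read back the SM count; the 64 FP32 cores per SM multiplier below is a Pascal-specific assumption, not a queried value:

```python
# Read back the device configuration with PyTorch.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    CORES_PER_SM_PASCAL = 64  # architecture-specific assumption for GP100
    print(f"Device:           {props.name}")
    print(f"SM count:         {props.multi_processor_count}")  # 56 on a P100
    print(f"CUDA cores (est): {props.multi_processor_count * CORES_PER_SM_PASCAL}")
    print(f"Total memory:     {props.total_memory / 1024**3:.1f} GiB")
else:
    print("No CUDA device visible to this environment.")
```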

Memory architecture and performance

Memory is one of the standout features of this category: 16 GB of HBM2 stacked memory in CoWoS (chip-on-wafer-on-substrate) packaging, connected over a very wide 4096-bit bus. The result is very high sustained memory bandwidth, commonly published in the 700+ GB/s range (for example, ~732 GB/s for the 16 GB CoWoS HBM2 variant). This high memory throughput is critical for large data sets, matrix operations, and model training, where moving data to and from GPU DRAM is a performance bottleneck.
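
The published figure can be sanity-checked with simple arithmetic from the bus width and an approximate per-pin data rate (the ~1.43 Gb/s rate below is an assumption for this part, not a queried value):

```python
# Back-of-envelope check of the ~732 GB/s bandwidth figure.
bus_width_bits = 4096          # HBM2 interface width
gbps_per_pin = 1.43            # approximate effective data rate per pin
bandwidth_gb_s = bus_width_bits * gbps_per_pin / 8   # bits -> bytes
print(f"Peak memory bandwidth ≈ {bandwidth_gb_s:.0f} GB/s")   # ≈ 732 GB/s
```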

System interface, form factor and power

The card uses PCI Express Gen3 x16 as the system interface (standard for server integration) and comes in a dual-slot PCIe physical form factor. Power consumption for PCIe card variants is lower than that of the SXM form factor in the same family; most PCIe P100 cards are rated in the ~250 W class under normal maximum loads (check OEM listings for the exact configured TDP and required auxiliary power connectors). The datasheet and PCIe product brief describe the dual-slot, 10.5-inch card and common auxiliary power connection options for server chassis compatibility.

Floating point performance (practical metrics)

Published peak performance numbers for the P100 vary by precision mode; public datasheets list single-precision and double-precision peak TFLOPS ranges (single-precision in the 9–10 TFLOPS region and double-precision in the ~4–5 TFLOPS region for PCIe variants, depending on clock bins and boosts). When choosing accelerators for numerical workloads, note that real-world throughput depends strongly on memory bandwidth, kernel efficiency, and the ability to utilize available mixed-precision or tensor routines (where supported). 
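
Those headline figures can be reproduced from the core count and clock: each FP32 CUDA core retires one fused multiply-add (2 FLOPs) per cycle, and GP100 runs FP64 at half the FP32 rate. The ~1.30 GHz boost clock below is an assumed bin for PCIe cards:

```python
# Reproduce the peak TFLOPS estimates from first principles.
cuda_cores = 3584
boost_ghz = 1.30               # assumed boost clock for the PCIe variant
fp32_tflops = cuda_cores * 2 * boost_ghz / 1000   # 2 FLOPs per FMA
fp64_tflops = fp32_tflops / 2                     # GP100 FP64 is 1/2 FP32
print(f"FP32 ≈ {fp32_tflops:.1f} TFLOPS, FP64 ≈ {fp64_tflops:.1f} TFLOPS")
```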

Use cases and ideal workloads

High performance computing (HPC)

The Tesla P100 16 GB PCIe is well suited to HPC workloads that demand high double-precision throughput and large memory bandwidth, such as numerical simulation, finite element analysis, computational fluid dynamics (CFD), weather modeling, and molecular dynamics. Its architecture and ECC-enabled HBM2 memory provide the predictable, high-integrity calculations required by scientific computing.

Deep learning training and inference

For deep learning, the P100 delivers robust performance for training moderately large models and serves well for inference at scale. While more recent architectures (Volta, Turing, Ampere, etc.) introduced tensor cores or higher AI throughput, the P100 remains a cost-effective accelerator for customers who need high memory bandwidth and a strong mix of single- and double-precision performance. It is a common, economical choice in refurbished and mixed-generation clusters that run existing TensorFlow, PyTorch or CUDA-based pipelines.  
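
As a minimal smoke test, assuming a CUDA-enabled PyTorch build whose binaries still include Pascal (sm_60) kernels, a matrix multiply can be placed on the card to confirm the pipeline end to end:

```python
# Place a simple matrix multiply on the GPU if one is visible.
import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b                          # runs on the P100 when device is cuda:0
if device.type == "cuda":
    torch.cuda.synchronize()       # wait for the kernel to finish
print(f"matmul ran on {device}; result shape {tuple(c.shape)}")
```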

Data analytics and virtualization

Analytic workloads that use GPU-accelerated database engines (GPU-accelerated SQL, BlazingSQL, RAPIDS libraries) benefit from the P100’s bandwidth and memory capacity. Additionally, VMware and other virtualization ecosystems can incorporate P100 accelerators for GPU-pass-through or vGPU configurations where vendors support the Pascal family, enabling GPU acceleration for multiple virtual machines or containerized services. Always verify hypervisor compatibility when designing virtualized solutions.  
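
As an illustration of a GPU-side aggregation with RAPIDS cuDF; note that only older RAPIDS releases still ship Pascal support, so treat this as a sketch under that assumption:

```python
# Simple aggregation that executes entirely on the GPU via cuDF.
import cudf

df = cudf.DataFrame({"region": ["us", "eu", "us", "eu"],
                     "sales":  [120.0, 90.5, 305.2, 64.8]})
totals = df.groupby("region")["sales"].sum()   # GPU-side groupby/sum
print(totals)
```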

Compatibility  

Server chassis and cooling considerations

Because many P100 PCIe cards are passive-cooled (heatsink only), effective server integration requires sufficient chassis airflow and a compatible slot with adjacent intake/exhaust arrangements. Dual-slot occupancy is typical; make sure the host system’s power budget and air path are sized for a passive dual-slot accelerator rated near 250 W under sustained load. OEM vendors (e.g., HPE) may provide specific sleds, cooling ducts, or power harnesses for validated installation. 

Power connectors and electrical requirements

Check the specific OEM variant for auxiliary power connector requirements. Typical PCIe P100 cards rely on one or two 6/8-pin auxiliary connectors depending on the board design and power delivery configuration. Confirm the server's power supply capacity and distribution to prevent undervoltage or thermal throttling during sustained compute operations.
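
One way to confirm headroom during burn-in, assuming the nvidia-ml-py (pynvml) bindings are installed, is to poll the board's reported draw against its enforced limit:

```python
# Poll reported board power against the enforced limit via NVML.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
draw_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0           # mW -> W
limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle) / 1000.0  # mW -> W
print(f"Current draw: {draw_w:.0f} W of {limit_w:.0f} W limit")
pynvml.nvmlShutdown()
```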

Driver, CUDA and software environment

These accelerators require compatible NVIDIA drivers and a matching CUDA toolkit version for best results. For legacy Pascal devices, ensure you select a driver that supports Pascal-class GPUs and the target OS (RHEL, Ubuntu, CentOS, Windows Server, etc.). Most modern deep-learning frameworks (TensorFlow, PyTorch) continue to support Pascal GPUs through the appropriate CUDA/cuDNN stacks, but library and framework versions must match the installed CUDA runtime. 
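
A quick environment check along these lines, assuming PyTorch is installed, verifies that the runtime sees the card and reports Pascal's 6.0 compute capability:

```python
# Verify the CUDA build PyTorch was compiled against and the device's
# compute capability (Pascal P100 reports 6.0).
import torch

print(f"PyTorch CUDA build: {torch.version.cuda}")
print(f"CUDA available:     {torch.cuda.is_available()}")
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"Compute capability: {major}.{minor}")   # expect 6.0 on a P100
```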

Enhanced Processing Efficiency

The Nvidia Tesla P100 utilizes thousands of CUDA cores to deliver immense parallel processing capability. This allows scientists, engineers, and developers to process multiple operations simultaneously, leading to faster results and greater productivity across computational workloads.

Reliable PCIe 3.0 Connectivity

With PCI Express 3.0 x16 interface support, the Tesla P100 provides superior bandwidth for data transfer between the CPU and GPU. This ensures minimized latency, high throughput, and efficient scaling across complex computing nodes.
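
The practical ceiling of that link can be estimated from the Gen3 signaling rate and line encoding:

```python
# Estimate the per-direction ceiling of a PCIe 3.0 x16 link.
lanes = 16
raw_gt_per_s = 8.0            # PCIe Gen3 transfers per second, per lane
encoding = 128 / 130          # 128b/130b payload efficiency
gb_per_s = lanes * raw_gt_per_s * encoding / 8   # bits -> bytes
print(f"PCIe 3.0 x16 ≈ {gb_per_s:.1f} GB/s per direction")   # ≈ 15.8 GB/s
```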

Memory Performance and Technology

The built-in 16GB HBM2 (High Bandwidth Memory 2) is a cornerstone of this GPU’s speed. It supports rapid data access and efficient multitasking across diverse computing frameworks. Ideal for rendering 3D visualizations, running AI inferences, and managing extensive scientific data, this memory technology allows the Tesla P100 to operate smoothly under heavy load.

Comparisons and alternatives

P100 vs newer NVIDIA architectures (Volta, Turing, Ampere)

While the Tesla P100 provides strong memory bandwidth and balanced FP32/FP64 performance for its generation, newer NVIDIA families introduced features that may be relevant depending on your workload:

  • Volta (e.g., V100) added Tensor Cores for mixed-precision training acceleration and higher FP64 throughput in some SKUs.
  • Turing broadened RT and tensor capabilities for inference and graphics-oriented tasks.
  • Ampere and later generations improved raw TFLOPS, energy efficiency, and multi-instance GPU (MIG) partitioning in many models.

If your workload benefits from native tensor cores, INT8/FP16 inference acceleration, or the latest NVLink versions, evaluate newer architectures. However, for projects where cost and memory bandwidth are the priority and existing software stacks are certified on Pascal, the P100 remains a practical choice.

P100 PCIe vs P100 SXM variants

The SXM form factor of the P100 is designed for denser, higher-power configurations and supports NVLink interconnect for multi-GPU scaling with much higher inter-GPU bandwidth, but it requires SXM-compatible server platforms. The PCIe variant (which includes the 900-2H400-0300-031 OEM SKU) targets broader compatibility with standard server PCIe slots at the expense of the interconnect and maximum power ceiling of the SXM product. Choose SXM if you need the absolute maximum inter-GPU bandwidth and the OEM/server platform supports it; choose PCIe for easier drop-in upgrades in standard servers.

Reliability  

ECC and data integrity

The Tesla P100 family introduced ECC support for HBM2, enabling error detection and correction that is important for long-running scientific and financial calculations where bit errors can compromise results. ECC behavior and enablement are typically controlled through driver and BIOS settings and should be validated in production runs.  
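
Assuming the nvidia-ml-py (pynvml) bindings are available, the current and pending ECC mode can be read back before committing to production runs:

```python
# Read the current and pending ECC mode via NVML; changing the mode
# typically requires admin rights and a reboot to take effect.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
current, pending = pynvml.nvmlDeviceGetEccMode(handle)
print(f"ECC current: {'on' if current else 'off'}; "
      f"after reboot: {'on' if pending else 'off'}")
pynvml.nvmlShutdown()
```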

Cutting-Edge Technology for Professionals

  • Enables seamless machine learning training and inference with industry-leading CUDA cores.
  • Facilitates parallel computing for efficient scientific visualization and simulation.
  • Supports demanding applications in engineering, weather modeling, and molecular dynamics.
  • Reduces total cost of ownership by maximizing GPU utilization and energy efficiency.

Optimized Rendering Workflows

The Tesla P100 delivers rapid processing of graphical elements such as frame buffers, lighting data, and shader computations. Its ability to maintain real-time rendering under heavy workloads makes it a preferred choice for visualization professionals and scientific researchers alike.

Features
  • Manufacturer Warranty: None
  • Product/Item Condition: Excellent Refurbished
  • ServerOrbit Replacement Warranty: 1 Year Warranty