Tesla P100
April 2018

HPC data centers need to support the ever-growing demands of scientists and researchers while staying within a tight budget. The old approach of deploying lots of commodity compute nodes requires huge interconnect overhead that substantially increases costs without proportionally increasing performance.

NVIDIA Tesla P100 GPU accelerators are the most advanced ever built, powered by the breakthrough NVIDIA Pascal™ architecture and designed to boost throughput and save money for HPC and hyperscale data centers. The newest addition to this family, the Tesla P100 for PCIe, enables a single node to replace half a rack of commodity CPU nodes by delivering lightning-fast performance in a broad range of HPC applications.


Tesla P100 Accelerator Features and Benefits

PASCAL ARCHITECTURE
More than 18.7 TeraFLOPS of FP16, 4.7 TeraFLOPS of double-precision, and 9.3 TeraFLOPS of single-precision performance powers new possibilities in deep learning and HPC workloads.

COWOS HBM2
Compute and data are integrated on the same package using Chip-on-Wafer-on-Substrate (CoWoS) with HBM2 technology for 3X memory performance over the previous-generation architecture.

PAGE MIGRATION ENGINE
Simpler programming and performance tuning mean that applications can now scale beyond the GPU's physical memory size to virtually limitless levels, as illustrated in the sketch below.
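
As a rough illustration of what the Page Migration Engine makes possible, the following sketch (not part of the original datasheet) allocates a CUDA Unified Memory buffer larger than the P100's 16 GB and lets pages migrate between host and device on demand. The 32 GB buffer size, kernel, and variable names are illustrative assumptions, not NVIDIA sample code.

// Minimal sketch: oversubscribing GPU memory with CUDA Unified Memory.
// On Pascal-class GPUs the Page Migration Engine services page faults,
// so a managed allocation may exceed the GPU's physical memory.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, size_t n, float factor) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    // ~32 GB of floats: an assumed size chosen to exceed the P100's 16 GB.
    const size_t n = (32ULL << 30) / sizeof(float);
    float *data = nullptr;
    if (cudaMallocManaged(&data, n * sizeof(float)) != cudaSuccess) {
        fprintf(stderr, "managed allocation failed\n");
        return 1;
    }
    for (size_t i = 0; i < n; ++i) data[i] = 1.0f;   // first touched on the CPU

    const int block = 256;
    const size_t grid = (n + block - 1) / block;
    scale<<<(unsigned)grid, block>>>(data, n, 2.0f); // pages fault/migrate to the GPU as needed
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);               // pages migrate back on CPU access
    cudaFree(data);
    return 0;
}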

Specifications:

GPU Architecture:              NVIDIA Pascal™
CUDA Cores:                    3584
Single-Precision Performance:  9.3 TeraFLOPS
Double-Precision Performance:  4.7 TeraFLOPS
GPU Memory:                    16 GB CoWoS HBM2
Memory Bus Width:              4096-bit
Memory Bandwidth:              732 GB/s
Memory Clock:                  715 MHz
System Interface:              Full height/length PCI Express Gen3
Max Power:                     250 W
Form Factor:                   111.15 mm (4.38 in) H × 266.70 mm (10.5 in) L
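
On a system with the card installed, several of the values above can be read back at runtime through the CUDA runtime API. The short sketch below (not part of the original datasheet) queries device 0 with cudaGetDeviceProperties and prints the fields that correspond to the table; the exact output formatting is my own.

// Minimal sketch: reading back key Tesla P100 specifications at runtime.
// Field names come from the standard cudaDeviceProp structure.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        fprintf(stderr, "no CUDA device found\n");
        return 1;
    }
    printf("Name:               %s\n", prop.name);
    printf("Compute capability: %d.%d (Pascal is 6.x)\n", prop.major, prop.minor);
    printf("Multiprocessors:    %d (x 64 FP32 cores per SM = CUDA cores)\n",
           prop.multiProcessorCount);
    printf("Global memory:      %.1f GB\n", prop.totalGlobalMem / 1073741824.0);
    printf("Memory bus width:   %d-bit\n", prop.memoryBusWidth);
    printf("Memory clock:       %.0f MHz\n", prop.memoryClockRate / 1000.0); // reported in kHz
    return 0;
}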