| Model (Code name) | Release date | Architecture & fab | Transistors & die size | Core config[c] | Core clock[a] (MHz) | Texture fillrate[a][d] (GT/s) | Pixel fillrate[a][e] (GP/s) | Vector FP16[a][b] (TFLOPS) | Vector FP32[a][b] (TFLOPS) | Vector FP64[a][b] (TFLOPS) | Matrix INT8[a][b] (TOPS) | Matrix BF16[a][b] (TFLOPS) | Matrix FP16[a][b] (TFLOPS) | Matrix FP32[a][b] (TFLOPS) | Matrix FP64[a][b] (TFLOPS) | Memory bus type & width | Memory size (GB) | Memory clock (MT/s) | Memory bandwidth (GB/s) | TBP | Software Interface | Physical Interface |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Tesla V100 (PCIe) (GV100)[1][2] | May 10, 2017 | Volta, TSMC 12 nm | 12.1×10⁹, 815 mm² | 5120:320:128:640, 80 SM | 1370 | 438.4 | 175.36 | 28.06 | 14.03 | 7.01 | N/A | N/A | 112.23 | N/A | N/A | HBM2, 4096-bit | 16 / 32 | 1750 | 900 | 250 W | PCIe 3.0 ×16 | PCIe ×16 |
| Tesla V100 (SXM) (GV100)[3][4] | May 10, 2017 | Volta, TSMC 12 nm | 12.1×10⁹, 815 mm² | 5120:320:128:640, 80 SM | 1455 | 465.6 | 186.24 | 29.80 | 14.90 | 7.46 | N/A | N/A | 119.19 | N/A | N/A | HBM2, 4096-bit | 16 / 32 | 1750 | 900 | 300 W | NVLink | SXM2 |
| Radeon Instinct MI50 (Vega 20)[5][6][7][8][9][10] | Nov 18, 2018 | GCN 5, TSMC 7 nm | 13.2×10⁹, 331 mm² | 3840:240:64, 60 CU | 1450 *1725* | 348.0 *414.0* | 92.80 *110.4* | 22.27 *26.50* | 11.14 *13.25* | 5.568 *6.624* | N/A | N/A | 26.5 | 13.3 | ? | HBM2, 4096-bit | 16 / 32 | 2000 | 1024 | 300 W | PCIe 4.0 ×16 | PCIe ×16 |
| Radeon Instinct MI60 (Vega 20)[6][11][12][13] | Nov 18, 2018 | GCN 5, TSMC 7 nm | 13.2×10⁹, 331 mm² | 4096:256:64, 64 CU | 1500 *1800* | 384.0 *460.8* | 96.00 *115.2* | 24.58 *29.49* | 12.29 *14.75* | 6.144 *7.373* | N/A | N/A | 32 | 16 | ? | HBM2, 4096-bit | 32 | 2000 | 1024 | 300 W | PCIe 4.0 ×16 | PCIe ×16 |
| Tesla A100 (PCIe) (GA100)[14][15] | May 14, 2020 | Ampere, TSMC 7 nm | 54.2×10⁹, 826 mm² | 6912:432:-:432, 108 SM | 1065 *1410* | 460.08 *609.12* | - | 58.89 *77.97* | 14.72 *19.49* | 7.36 *9.75* | 942.24 *1247.47* | 235.56 *311.87* | 235.56 *311.87* | 117.78 *155.93* | 14.72 *19.49* | HBM2, 5120-bit | 40 / 80 | 3186 | 2039 | 250 W | PCIe 4.0 ×16 | PCIe ×16 |
| Tesla A100 (SXM) (GA100)[16][17] | May 14, 2020 | Ampere, TSMC 7 nm | 54.2×10⁹, 826 mm² | 6912:432:-:432, 108 SM | 1275 *1410* | 550.80 *609.12* | - | 70.50 *77.97* | 17.63 *19.49* | 8.81 *9.75* | 1128.04 *1247.47* | 282.01 *311.87* | 282.01 *311.87* | 141.00 *155.93* | 17.63 *19.49* | HBM2, 5120-bit | 40 / 80 | 3186 | 2039 | 400 W | NVLink | SXM4 |
| AMD Instinct MI100 (Arcturus)[18][19] | Nov 16, 2020 | CDNA, TSMC 7 nm | 25.6×10⁹, 750 mm² | 7680:480:-:480, 120 CU | 1000 *1502* | 480 *720.96* | - | ? | 15.72 *23.10* | 7.86 *11.5* | 122.88 *184.57* | 61.44 *92.28* | 122.88 *184.57* | 30.72 *46.14* | 15.36 *23.07* | HBM2, 4096-bit | 32 | 2400 | 1228 | 300 W | PCIe 4.0 ×16 | PCIe ×16 |
| AMD Instinct MI250X (PCIe) (Aldebaran) | Nov 8, 2021 | CDNA 2, TSMC 6 nm | 58×10⁹, 1540 mm² | 14080:880:-:880, 220 CU |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| AMD Instinct MI250X (OAM) (Aldebaran) | Nov 8, 2021 | CDNA 2, TSMC 6 nm | 58×10⁹, 1540 mm² | 14080:880:-:880, 220 CU |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Tesla H100 (PCIe) (GH100) | Mar 22, 2022 | Hopper, TSMC 4 nm | 80×10⁹, 814 mm² |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
| Tesla H100 (SXM) (GH100) | Mar 22, 2022 | Hopper, TSMC 4 nm | 80×10⁹, 814 mm² |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |  |
- [a] Boost values (where available) are shown in italics after the base value.
- [b] Precision performance is calculated from the base (or boost) core clock speed, counting a fused multiply-add (FMA) as two floating-point operations per lane per clock; see the worked example after these notes.
- [c] Unified shaders : texture mapping units : render output units : AI accelerators, followed by the number of compute units (CU) or streaming multiprocessors (SM).
- [d] Texture fillrate is the number of texture mapping units multiplied by the base (or boost) core clock speed.
- [e] Pixel fillrate is the number of render output units multiplied by the base (or boost) core clock speed.
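The notes above give the fillrate and throughput formulas only in prose. Below is a minimal Python sketch (an illustration, not part of the cited sources) that reproduces the Tesla V100 (PCIe) row from its core configuration and base clock; the 2× FP16 and ½ FP64 ratios and the 128 FLOPs/clock per tensor core are GV100-specific assumptions, not part of notes [b], [d], [e].

```python
# Worked example: derive the Tesla V100 (PCIe) fillrate and throughput columns
# from its 5120:320:128:640 configuration and 1370 MHz base clock.

def fillrate_gops(units: int, clock_mhz: float) -> float:
    """Fillrate in G ops/s: functional units x clock in GHz (notes [d] and [e])."""
    return units * clock_mhz / 1000


def vector_tflops(shaders: int, clock_mhz: float, ratio: float = 1.0) -> float:
    """Vector TFLOPS: shaders x 2 FLOPs (one FMA) x clock, scaled by a precision ratio (note [b])."""
    return shaders * 2 * ratio * clock_mhz / 1e6


shaders, tmus, rops, tensor_cores, mhz = 5120, 320, 128, 640, 1370

print(f"Texture fillrate: {fillrate_gops(tmus, mhz):.2f} GT/s")            # 438.40
print(f"Pixel fillrate:   {fillrate_gops(rops, mhz):.2f} GP/s")            # 175.36
print(f"Vector FP32:      {vector_tflops(shaders, mhz):.2f} TFLOPS")       # 14.03
print(f"Vector FP16:      {vector_tflops(shaders, mhz, 2):.2f} TFLOPS")    # 28.06 (2x FP32 rate on GV100)
print(f"Vector FP64:      {vector_tflops(shaders, mhz, 0.5):.2f} TFLOPS")  # 7.01 (1/2 FP32 rate on GV100)
# Matrix FP16: each GV100 tensor core performs a 4x4x4 FMA per clock, i.e. 128 FLOPs/clock.
print(f"Matrix FP16:      {tensor_cores * 128 * mhz / 1e6:.2f} TFLOPS")    # 112.23
```

Repeating the calculation with the boost clock (where one is listed) yields the italicised boost figures in the same way.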
Template documentation
The usual place to discuss the layout and style of the AMD GPU tables is Talk:List of AMD graphics processing units.
References
1. Oh, Nate (December 16, 2022). "Nvidia Formally Announced PCIe Tesla V100". AnandTech.
2. "NVIDIA Tesla V100 PCIe 16GB". TechPowerUp.
3. Smith, Ryan (December 19, 2022). "Nvidia Volta Unveiled". AnandTech.
4. "NVIDIA Tesla V100 SXM3 32GB". TechPowerUp.
5. Walton, Jarred (January 10, 2019). "Hands on with the AMD Radeon VII". PC Gamer.
6. "Next Horizon – David Wang Presentation" (PDF). AMD.
7. "AMD Radeon Instinct MI50 Accelerator (16GB)". AMD.
8. "AMD Radeon Instinct MI50 Accelerator (32GB)". AMD.
9. "AMD Radeon Instinct MI50 Datasheet" (PDF). AMD.
10. "AMD Radeon Instinct MI50 Specs". TechPowerUp. Retrieved May 27, 2022.
11. "Radeon Instinct MI60". AMD. Archived from the original on November 22, 2018. Retrieved May 27, 2022.
12. "AMD Radeon Instinct MI60 Datasheet" (PDF). AMD.
13. "AMD Radeon Instinct MI60 Specs". TechPowerUp. Retrieved May 27, 2022.
14. "Nvidia A100 Tensor Core GPU Architecture" (PDF). Nvidia. Retrieved December 12, 2022.
15. "Nvidia A100 PCIe 80 GB Specs". TechPowerUp. Retrieved December 12, 2022.
16. "Nvidia A100 Tensor Core GPU Architecture" (PDF). Nvidia. Retrieved December 12, 2022.
17. "Nvidia A100 SXM4 80 GB Specs". TechPowerUp. Retrieved December 12, 2022.
18. "AMD Instinct MI100 Brochure" (PDF). AMD. Retrieved December 25, 2022.
19. "AMD CDNA Whitepaper" (PDF). AMD. Retrieved December 25, 2022.