
AMD Announces Instinct MI100 As World's Fastest HPC GPU With 11.5 TFLOPs Peak FP64 Performance


AMD is really throwing down the gauntlet across all the computing fields in which it competes. The Ryzen 5000 processor family has already been shown to dominate Intel's best Comet Lake-S desktop offerings, and AMD is promising to battle NVIDIA toe-to-toe with the Radeon RX 6000 Series. AMD's competitive spirit is also alive in the high-performance computing (HPC) market with the introduction of the Instinct MI100 accelerator.

Information regarding the Instinct MI100 first leaked out earlier this month, but AMD is now giving us our first official look at the card. Like the Radeon RX 6000 Series, the Instinct MI100 is an absolute beast, with AMD claiming that it is the world's fastest HPC GPU. And for those keeping score, the company is claiming dominance over NVIDIA's Ampere-based A100.

AMD is claiming that the Instinct MI100, which uses the company's first-generation CDNA architecture, offers unmatched FP64 and FP32 performance for HPC workloads, coming in at 11.5 TFLOPS (compared to 9.7 for the A100) and 23.1 TFLOPS (compared to 19.5 for the A100), respectively. The Instinct MI100 is able to deliver this kind of performance thanks to the powerful hardware under the hood: namely, a staggering 120 compute units (7,680 stream processors), which is 40 more than what's found in the mighty Radeon RX 6900 XT.
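Those headline figures fall straight out of the shader count and clock speed. A quick sanity check, assuming the usual AMD figures of 64 stream processors per compute unit and one fused multiply-add (2 FLOPs) per stream processor per clock, with FP64 at half the FP32 rate:

```python
# Peak vector throughput for the Instinct MI100, derived from its
# published specs: 120 compute units and a 1,502 MHz peak engine clock.
# Assumes 64 SPs per CU and 2 FP32 ops (FMA) per SP per clock, with
# FP64 at half rate -- standard ratios for AMD GPUs, not confirmed here.
compute_units = 120
sp_per_cu = 64
clock_hz = 1_502e6

stream_processors = compute_units * sp_per_cu          # 7,680
fp32_tflops = stream_processors * 2 * clock_hz / 1e12  # ~23.1
fp64_tflops = fp32_tflops / 2                          # ~11.5

print(f"{stream_processors} SPs, {fp32_tflops:.1f} FP32 TFLOPS, {fp64_tflops:.1f} FP64 TFLOPS")
```

Both results land on AMD's quoted 23.1 and 11.5 TFLOPS once rounded to one decimal place.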

The GPU, which is built on a 7nm FinFET process node, has a peak engine clock of 1,502MHz and comes with 32GB of HBM2 ECC memory onboard. The HBM2 uses a 4,096-bit memory interface and is clocked at 1.2GHz, offering up to 1,228.8GB/sec of bandwidth. The Instinct MI100 is of course PCIe 4.0-based, and features a dual-slot design that accepts two 8-pin power inputs. And despite a total board power of 300W, the Instinct MI100 features passive cooling.
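The memory bandwidth figure also checks out from the interface width and clock, assuming double-data-rate signaling (two transfers per clock), as is standard for HBM2:

```python
# Peak HBM2 bandwidth for the Instinct MI100: a 4,096-bit interface
# clocked at 1.2 GHz with DDR signaling (two transfers per clock).
bus_width_bits = 4096
memory_clock_hz = 1.2e9
transfers_per_clock = 2  # DDR: data moves on both clock edges

bandwidth_gbs = bus_width_bits / 8 * memory_clock_hz * transfers_per_clock / 1e9
print(f"{bandwidth_gbs:.1f} GB/s")
```

That works out to exactly the 1,228.8GB/sec AMD quotes.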

If that isn't enough, here are some more specs that AMD handed down for the Instinct MI100:

  • Up to 46.1 TFLOPs FP32 Matrix Peak Performance with All-New Matrix Cores for HPC & AI Workloads
  • Up to 184.6 TFLOPs FP16 & 92.3 TFLOPs bFloat16 Peak for Ultra-Fast AI Training
  • 2nd Gen Infinity Architecture with up to 340 GB/s of aggregate P2P GPU I/O bandwidth
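The matrix-core numbers in the list above also line up as simple multiples of the vector FP32 rate. A sketch of the arithmetic, assuming the matrix cores double FP32 throughput and run FP16 at 4x (and bFloat16 at 2x) the matrix FP32 rate; these ratios are inferred from the quoted figures, not stated by AMD here:

```python
# Check AMD's matrix-core peaks against the vector FP32 rate.
# Vector FP32: 120 CUs x 64 SPs x 2 FLOPs x 1,502 MHz (~23.1 TFLOPS).
fp32_vector_tflops = 120 * 64 * 2 * 1_502e6 / 1e12

matrix_fp32_tflops = fp32_vector_tflops * 2  # ~46.1, matrix cores at 2x vector
matrix_fp16_tflops = matrix_fp32_tflops * 4  # ~184.6, FP16 at 4x matrix FP32
matrix_bf16_tflops = matrix_fp32_tflops * 2  # ~92.3, bFloat16 at 2x matrix FP32

print(f"{matrix_fp32_tflops:.1f} / {matrix_fp16_tflops:.1f} / {matrix_bf16_tflops:.1f} TFLOPS")
```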

“Today AMD takes a major step forward in the journey toward exascale computing as we unveil the AMD Instinct MI100 – the world’s fastest HPC GPU,” said Brad McCredie, corporate vice president, Data Center GPU and Accelerated Processing, AMD. “Squarely targeted toward the workloads that matter in scientific computing, our latest accelerator, when combined with the AMD ROCm open software platform, is designed to provide scientists and researchers a superior foundation for their work in HPC.”

As you might expect, AMD already has a lot of customers lined up to use the Instinct MI100, including big names like Dell, Hewlett Packard Enterprise, Supermicro, and Gigabyte. According to AMD, the cards will first be available from these OEM and ODM partners by the end of 2020.
