"ARCHER2 should be capable on average of over eleven times the science throughput of ARCHER, based on benchmarks which use five of the most heavily used codes on the current service," UKRI stated in an email.
ARCHER2 will employ 5,848 liquid-cooled Shasta Mountain compute nodes, each of which houses a pair of 64-core/128-thread AMD EPYC processors clocked at 2.2GHz. In total, ARCHER2 will crunch through workloads using 748,544 cores and 1.57 petabytes of memory. UKRI estimates peak performance at around 28 PFLOP/s.
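The headline figures hang together arithmetically; a quick sanity check using only the numbers quoted above (the per-node memory figure is derived, not officially stated):

```python
# Sanity-check the quoted ARCHER2 figures.
nodes = 5848            # Shasta Mountain compute nodes
cpus_per_node = 2       # EPYC processors per node
cores_per_cpu = 64

total_cores = nodes * cpus_per_node * cores_per_cpu
print(total_cores)      # 748544, matching the quoted core count

# Per-node memory implied by the 1.57 PB aggregate (derived estimate).
mem_per_node_gb = 1.57e6 / nodes
print(round(mem_per_node_gb))   # roughly 268 GB per node
```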
Here are some additional stats:
- 14.5 PBytes of Lustre work storage in 4 file systems
- 1.1 PByte all-flash Lustre BurstBuffer file system
- 1+1 PByte home file system in Disaster Recovery configuration using NetApp FAS8200
- Cray next-generation Slingshot 100Gbps network in a diameter-three dragonfly topology, consisting of 46 compute groups, 1 I/O group and 1 Service group
- Shasta River racks for management and post processing
- Test and Development System (TDS) platform, to be installed in advance
- Collaboration platform with 4 x compute nodes attached to 16 x Next Generation AMD GPUs
- Software stack:
  - Cray Programming Environment, including optimizing compilers and libraries for the AMD Rome CPU
  - Cray Linux Environment optimized for the AMD CPU blade, based on SLES 15
  - Shasta Software Stack
  - SLURM workload manager
  - CrayPat profiler
  - GDB4HPC debugger
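To give a flavour of how users would drive a SLURM-managed machine with these nodes, here is a minimal batch-script sketch for one 128-core node; the partition and account names are hypothetical placeholders, not published ARCHER2 settings:

```shell
#!/bin/bash
# Hypothetical SLURM batch script for a single ARCHER2-style node:
# 2 x 64-core EPYC CPUs = 128 physical cores per node.
#SBATCH --job-name=example
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=128
#SBATCH --cpus-per-task=1
#SBATCH --time=00:10:00
#SBATCH --partition=standard   # partition name is an assumption
#SBATCH --account=t01          # budget/account code is an assumption

# Launch one MPI rank per physical core with srun.
srun --distribution=block:block ./my_mpi_app
```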
ARCHER2 will be installed in the same room that currently houses ARCHER, so the existing machine must be decommissioned before its successor can move in, resulting in a period of downtime between the two services. Once it goes live, however, ARCHER2 will rank as Europe's most powerful supercomputer.