
The quest for Exascale: Have the goal posts changed in HPC?

There has been a lot of noise in the computing industry over Artificial Intelligence (AI) and Machine Learning (ML) in the last couple of years. The term AI has embedded itself in our culture, together with the word ‘Smart’, and internet giants like Amazon and Google have extended AI’s reach into every nook and cranny of our lives.

Behind all of this are huge computing resources which a few years ago would have been the domain of CPUs, but today NVIDIA GPUs, and to some extent FPGAs, rule the roost in this new world of neural networks.

How does this fit in with HPC? Computer chip manufacturers have not been standing idly by. IBM’s POWER9 server, together with tighter integration with NVIDIA’s GPUs, has produced a computational powerhouse that will power a couple of the fastest systems on the planet.

ARM is making its way into the traditional datacentre and HPC market as the Cavium ThunderX2 gains traction. Fujitsu is already planning to use ARM to power the Post-K machine expected in 2020, promised to be a 1,000 PFLOPS beast.

Intel continues to progress in the market with its x86 processors, and it appears to have discarded Xeon Phi, rolling its features, such as AVX-512, into its mainstream processors. What of the Altera purchase, the most expensive Intel acquisition to date? FPGAs must surely factor into its products going forward.

The constraints of Exascale still exist, as all of the above still consume a huge amount of power. However, each vendor is making inroads into producing better performance in a smaller package. As Jensen Huang, President and CEO of NVIDIA, says, “The more GPUs you buy, the more you save”, referring to a single new DGX-2 system replacing 300 dual-CPU servers. Clearly, the power saving is on a huge scale.

But still, not all applications lend themselves to being ported to GPUs. Exascale will be achieved by a mixture of processor technologies: a CPU aided by an accelerator (whether GPUs or something else), since data must be fed to the GPUs by the host CPU, which also communicates with the rest of the system, storage and so on.

A lot of the technologies used within AI can also be used within HPC, as the GPUs, CPUs and interconnect technologies naturally lend themselves to either HPC or AI applications. I believe that even fp16/fp32 tensor cores will have a place in future applications.
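The idea behind fp16/fp32 tensor cores is mixed precision: inputs are stored in half precision (fp16) to save memory and bandwidth, while the dot-product accumulation happens in single precision (fp32) to limit rounding error. A minimal NumPy sketch of that idea, not real tensor-core code (the function name and sizes here are illustrative assumptions):

```python
import numpy as np

def mixed_precision_matmul(a_fp16, b_fp16):
    """Multiply two fp16 matrices, accumulating the products in fp32,
    mimicking the fp16-multiply / fp32-accumulate pattern of tensor cores."""
    return np.matmul(a_fp16.astype(np.float32), b_fp16.astype(np.float32))

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64)).astype(np.float16)  # fp16 storage
b = rng.standard_normal((64, 64)).astype(np.float16)

result = mixed_precision_matmul(a, b)                 # fp32 result

# Compare against a full fp64 reference: the remaining error comes
# mainly from rounding the inputs to fp16, not from the accumulation.
ref = np.matmul(a.astype(np.float64), b.astype(np.float64))
max_err = np.abs(result - ref).max()
```

On real hardware the accumulation step is done inside the tensor core itself, which is why deep-learning workloads can halve their memory traffic without giving up the numerical stability of fp32 sums.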

It is reminiscent of the very beginning of GPGPU computing, when someone realised that the graphics viewing pipeline was inherently parallel in nature and matrix-maths orientated, and so could be used for general computation.

AI has brought a new class of user to HPC providers, including users who are not computing proficient. For example, the English-literature researcher who wants to scan documents to determine whether a particular author wrote an entire piece of work, or whether a university student has plagiarised text from the internet. One safari park employed ML techniques borrowed from astrophysics, originally used to identify the thermal and chemical compositions of distant stars, to locate and count endangered animals hidden in the bush and help game reserves. AI and ML are opening up new avenues of computing for users who haven’t traditionally used HPC.

So, back to my original question: is Exascale over? The Exascale goal hasn’t gone away; it just seems to be hidden by the uptake in AI.
