There was an interesting story published last month in which NVIDIA’s founder and CEO, Jensen Huang, said: “As advanced parallel-instruction architectures for CPU can be barely worked out by designers, GPUs will soon replace CPUs”.
Here’s the thing: a GPU is never going to run an accounting package, for example, or Microsoft Word; it will always be used as an accelerator for computationally intensive work. I think that needs clarifying. A GPU is only an accelerator, not the main computational component of a system, so there will always be a need for traditional CPUs. GPUs will not be replacing CPUs any time soon.
And, of course, GPUs aren’t the only accelerators out there either – there are a number of competing technologies, such as the Xeon Phi from Intel (which can also function as a standalone processor), FPGAs from Altera and Xilinx, and even DSPs such as those from Texas Instruments.
We do see GPUs used successfully in HPC and gaming. If we look at the TOP500 list of supercomputers, many of the top-ranked systems use either GPUs or Xeon Phi to boost performance. Although a little dated, an Intersect360 Research report published in 2015 found that a third of HPC systems were equipped with accelerators; 80% of the accelerators used were GPUs, with NVIDIA holding 78% of that market.
The real boon for NVIDIA, ARM and their GPUs is the growth of Artificial Intelligence and Machine Learning, where GPUs have become the go-to technology for accelerating the algorithms. So much so, in fact, that NVIDIA is prepared to bet billions on its technology driving this new era of data-intensive computing.
Whilst I’m happy to sing the praises of GPUs over CPUs for certain applications, it’s not simply a case of replacing one with the other – there are other challenges worth noting.
First, there are power requirements to consider. It’s something of a trade-off and depends on constraints such as the size of the HPC system, physical space in the data centre and the power delivered to the system – a typical rack in a modern data centre supplies only around 5–10 kW. Using GPUs in the HPC data centre alongside CPUs can dramatically increase the power the system draws.
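To make that concrete, here is a back-of-the-envelope sketch of how a fixed rack power budget limits node counts. The per-node and per-GPU wattages are illustrative assumptions, not vendor figures; only the 5–10 kW rack budget comes from the text above.

```python
# Back-of-the-envelope rack power budget (illustrative numbers, not vendor specs).
RACK_BUDGET_KW = 10.0                  # upper end of the 5-10 kW range mentioned above
CPU_NODE_KW = 0.5                      # assumed power draw of a dual-socket CPU node
GPU_NODE_KW = CPU_NODE_KW + 4 * 0.3    # same node plus four hypothetical ~300 W GPUs

# How many of each node type fit within the rack's power budget.
cpu_nodes_per_rack = int(RACK_BUDGET_KW // CPU_NODE_KW)
gpu_nodes_per_rack = int(RACK_BUDGET_KW // GPU_NODE_KW)

print(f"CPU-only nodes per rack: {cpu_nodes_per_rack}")
print(f"GPU nodes per rack:      {gpu_nodes_per_rack}")
```

With these assumed figures, the same rack hosts far fewer GPU nodes than CPU-only nodes – each GPU node has to deliver a correspondingly larger speed-up to justify its share of the power budget.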
Second, the bottleneck between CPU and GPU – feeding data to the accelerator fast enough – is another challenge to contend with. Traditionally, data has to travel to the accelerator over the PCIe bus at around 32 GB/s. NVLink goes some way towards solving this problem, providing up to 80 GB/s between GPU and CPU – roughly two and a half times the bandwidth of traditional PCIe – but there’s still a bottleneck to contend with.
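A quick sketch shows why the interconnect matters: the time just to stream a working set to the GPU puts a floor under any speed-up. The 16 GB working-set size is a hypothetical example; the link rates are the aggregate figures quoted above.

```python
# Lower bound on the time to move a working set over the host-to-GPU link.
PCIE_GBPS = 32.0    # PCIe 3.0 x16, aggregate, as quoted in the text
NVLINK_GBPS = 80.0  # first-generation NVLink, aggregate, as quoted in the text

def transfer_seconds(gigabytes, link_gbps):
    """Best-case time to stream `gigabytes` of data over a link of `link_gbps` GB/s."""
    return gigabytes / link_gbps

data_gb = 16.0  # hypothetical working set that must reach the GPU before compute starts
print(f"PCIe:   {transfer_seconds(data_gb, PCIE_GBPS):.3f} s")
print(f"NVLink: {transfer_seconds(data_gb, NVLINK_GBPS):.3f} s")
```

Even with NVLink, a fifth of a second per 16 GB transfer is dead time unless it can be overlapped with computation – which is why the transfer bottleneck remains a real concern.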
GPUs are essential in HPC and will have a big impact on the evolution of AI, Machine Learning and other applications. Scientists, researchers, universities and research institutes all know that speeding up applications is nothing but good for business – and research – so GPUs are here to stay. However, they won’t replace CPUs for everything, and certainly not for desktop computing. And they’re not the only accelerators around!
If you have a view to share, I’d love to hear it.