I haven’t quite managed a full 10 years in the HPC industry yet, but I’m only a few months short and I really wanted to talk about the past decade nonetheless; so much has happened. None of it is a surprise, I’ll hasten to add [not because of a crystal ball] but because it’s been an evolutionary period.
In terms of interconnects, I can clearly recall the evolution from Wulfkit to Myrinet [the latter we installed at the University of Reading and Airbus], and now we use InfiniBand as standard. It’s been a natural progression to ever better, faster technologies. InfiniBand has become completely mainstream, and I’m actually looking forward to seeing it progress further to become a PC networking tool in the future. That’s my tip for you!
Processor technology has switched between two firms: from AMD in the early years, when we sold AMD clusters to the likes of the University of Southampton and Red Bull, to Intel today. Intel’s Nehalem processor made a big impact, then Westmere [of which OCF was responsible for the first UK deployment, in IBM iDataPlex servers], and now Sandy Bridge is doing the same. AMD should have mounted a stronger challenge to Intel, considering how well it was performing only a few years ago. I would really hope to see them back in HPC soon.
On the same topic of processors, I think the biggest disappointment of the last decade has been the failure of the Cell processor to take hold. I remember the launch and near-immediate withdrawal of IBM’s Cell-based machines, such as the BladeCenter QS22 blade server. It’s such a shame, considering Cell had helped power supercomputers like Roadrunner to the top of the rankings for a time. Here’s tip number two: I’m looking forward to the impact that ARM’s CPUs will have on the industry – definitely one to watch.
Software has ‘come on’ in leaps and bounds too, but I think we’re still some way off making software automatically work across a cluster with a one-click install. We’re increasingly working with software vendors, but one-click installs remain tough. By contrast, cluster implementation has become much easier: one-click installations and re-installations are possible, and we’ve reduced integration time per server from two hours to 15 minutes.
Over the last decade, what I’ve witnessed is that lots of people have now seen what high performance server clusters can do for them. As such, server clusters are becoming much more business critical – we’re not installing standalone systems anymore, kept isolated from the rest of the IT infrastructure. We’re being asked to integrate clusters with existing systems, firewalls, gateways and storage systems. Clusters have evolved to become an integral part of the business.