The team at OCF recently worked on a particularly interesting project, which saw us design, integrate and configure a brand-new HPC system in a custom-built, shipping-container-style data centre.
Existing departmental resources within the Life Sciences and Physics departments were reaching the end of their life, so, using its own funding alongside a large grant from the Medical Research Council, the University set out to acquire a new, central core HPC resource to support researchers from across the institution.
As part of the original tender, the University asked for options to provide temporary housing for the new HPC machine whilst work on a new data hall was being finished. Being the technical innovators that we are, we proposed a unique way to house the new HPC machine, named Isca after the Roman name for Exeter: a shipping container from Stulz Technology Integration Limited.
Nicknamed The Pod, the data centre was a highly specialised, custom-fabricated Rapid Deployment Data Centre (RDDC) container, providing the power and cooling needed to run today’s sophisticated HPC systems. This formed phase one of the project.
We designed, integrated and configured the HPC machine and had the entire system delivered in its container to the University in 2016, where it lived on campus until the summer of 2017.
Over the course of the year, the University’s HPC architects and researchers tested and used the system to build an understanding of how it was being used. This insight, along with some advice from us, informed phase two of the project: expanding the system and moving it to its final location in the new data centre hall on campus.
The final system takes advantage of Lenovo’s NeXtScale servers, connected through Mellanox EDR InfiniBand to three GS7K parallel file system appliances from DDN Storage. OCF’s own open source HPC software stack, based on xCAT, runs on the system alongside RDO OpenStack, NICE DCV and Adaptive Computing MOAB.
Alongside the standard compute nodes, Isca also has various pieces of specialist kit, including NVIDIA GPU nodes, Intel Xeon Phi nodes and OpenStack cloud nodes.
The University’s Technical Architect, David Barker, told us: “We wanted to ensure that the new system caters for as wide a variety of research projects as possible, so the system reflects the diversity of the applications and requirements our users have.”
The impact on research has been significant, with researchers seeing projects run 2-3x quicker than on the previous departmental clusters.
The two-phased approach worked well for Exeter, enabling them to identify the use cases and technologies that would benefit researchers across many disciplines at the University.
If you’d like to know more about Exeter and Isca, you can read more in-depth articles about the project in Technology Networks, University Business, or DataCenter Dynamics. I’d love to hear your thoughts too – what’s the most unusual data centre location you’ve seen? Comment below or get in touch here.