
New HPC cluster benefitting University of Oxford

Researchers from across the University of Oxford will benefit from a new High Performance Computing system designed and integrated by OCF. The new Advanced Research Computing (ARC) central HPC resource is supporting research across all four Divisions at the University: Mathematical, Physical and Life Sciences; Medical Sciences; Social Sciences; and Humanities.

With around 120 active users per month, the new HPC resource will support a broad range of research projects across the University. As well as computational chemistry, engineering, financial modeling, and data mining of ancient documents, the new cluster will be used in collaborative projects like the T2K experiment using the J-PARC accelerator in Tokai, Japan. Other research will include the Square Kilometre Array (SKA) project, and anthropologists using agent-based modeling to study religious groups. The new service will also support the Networked Quantum Information Technologies Hub (NQIT), led by Oxford, which aims to design new forms of computers that will accelerate discoveries in science, engineering and medicine.

The new HPC cluster comprises Lenovo NeXtScale servers with Intel Haswell CPUs, connected by 40Gb/s InfiniBand to an existing Panasas storage system. The storage system was also upgraded by OCF, adding 166TB for a total capacity of 400TB. Existing Intel Ivy Bridge and Sandy Bridge CPUs from the University of Oxford's older machine are still running and will be merged into the new cluster.
Twenty NVIDIA Tesla K40 GPUs were also added at the request of NQIT, which co-invested in the new machine. This will also benefit NVIDIA's CUDA Centre of Excellence, which is likewise based at the University.

“After seven years of use, our old SGI-based cluster had really come to the end of its life and it was very power hungry, so we were able to put together a good business case to invest in a new HPC cluster,” said Dr Andrew Richards, Head of Advanced Research Computing at the University of Oxford. “We can operate the new 5,000 core machine for almost exactly the same power requirements as our old 1,200 core machine.

“The new cluster will not only support our researchers but will also be used in collaborative projects; we’re part of Science Engineering South, a consortium of five universities working on e-infrastructure, particularly around HPC.

“We also work with commercial companies who can buy time on the machine, so the new cluster is supporting a whole host of different research across the region.”

The Simple Linux Utility for Resource Management (SLURM) job scheduler manages the new HPC resource, and is able to support both the GPUs and the three generations of Intel CPUs within the cluster.

Julian Fielden, Managing Director at OCF, comments: “With Oxford providing HPC not just to researchers within the University, but to local businesses and in collaborative projects, such as the T2K and NQIT projects, the SLURM scheduler really was the best option to ensure different service level agreements can be supported. If you look at the Top500 list of the world’s fastest supercomputers, they’re now starting to move to SLURM. The scheduler was specifically requested by the University to support GPUs and the heterogeneous estate of different CPUs, which the previous TORQUE scheduler couldn’t, so this forms quite an important part of the overall HPC facility.”
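To illustrate the kind of heterogeneous scheduling described above, a SLURM batch script can target specific GPU and CPU resources via partition, GRES and constraint directives. This is a minimal sketch only: the partition name, GRES label and feature tag below are hypothetical placeholders, not the actual names configured on the Oxford cluster.

```shell
#!/bin/bash
# Minimal SLURM batch script sketch; resource names are assumptions,
# not the real site configuration.
#SBATCH --job-name=demo-job
#SBATCH --partition=gpu          # hypothetical GPU partition name
#SBATCH --gres=gpu:2             # request two GPUs on the node
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --time=02:00:00
#SBATCH --constraint=haswell     # pin the job to one CPU generation
                                 # in a mixed Haswell/Ivy Bridge/Sandy Bridge estate

srun ./my_application            # launch the job step under SLURM
```

Submitted with `sbatch script.sh`, a script like this lets the scheduler place work on the right hardware generation, which is how a single SLURM instance can manage GPUs alongside several CPU generations in one cluster.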

The University of Oxford will officially unveil the new cluster, named Arcus Phase B, on 14th April. Dr Richards continues: “As a central resource for the entire University, we really see ourselves as the first stepping stone into HPC. From PhD students upwards, people who haven’t used HPC before are who we really want to engage with. I don’t see our facility as just running a big machine; we’re here to help people do their research. That’s our value proposition and one that OCF has really helped us to achieve.”


ANSYS, Lenovo and OCF build CFD-ready HPC Appliance

UK’s first ANSYS CFD-ready HPC appliance eases cluster deployment while boosting engineering productivity

UK engineers requiring High-Performance Computing (HPC) power for their Computational Fluid Dynamics (CFD) simulations can now purchase a pre-configured, easy-to-deploy, ANSYS-ready HPC ‘appliance’. The plug-and-play appliance is available following a unique partnership between engineering simulation software provider ANSYS, hardware vendor Lenovo and high performance cluster integrator OCF.

With a minimum configuration of two dual-socket compute nodes, even the smallest server cluster appliance in the portfolio can offer twice the compute performance of the most powerful workstation. Combined with 24×7 availability and intelligent job scheduling, the appliance can offer up to a six-times improvement in engineering productivity over a single workstation.

Pre-built, configured and tested with both job scheduling software and ANSYS software, the appliance provides engineers with an advanced structural analysis and CFD environment that is simple and quick to deploy.

“We’ve come across lots of engineers using workstations and laptops to process their ANSYS CFD simulations, which restricts use of their device, even for email, until the job is complete,” says Andrew Dean, HPC business development manager, OCF. “The users we speak to are also unsure when their jobs will finish – it could be after a couple of hours or half a day. The job could finish in the middle of the night when they’re unavailable to start a new action. Plus, of course, the job might even have crashed and they wouldn’t know.”

He adds: “The appliance shifts simulation jobs off local workstations to a central resource, enabling engineers to truly multi-task. Complete with a job scheduler, the appliance can be kept at maximum utilisation, with the possibility of submitting jobs 24/7.”

Each appliance comes pre-built with a fixed head node, chassis, memory, the latest Intel processors and fixed switches. Each appliance can be easily expanded to fit customer need, because its blade architecture makes it simple to add additional Lenovo NeXtScale compute nodes.

“Our customers are engineering experts, but that expertise doesn’t always stretch to HPC cluster selection and deployment,” says Wim Slagter, lead product manager for HPC, at ANSYS, Inc. “We want to give our customers the best possible experience and, for that reason, we are working with OCF and Lenovo to provide our customers an ANSYS-optimised cluster solution designed for ease of procurement, deployment and operation.”



    Contact Us

    OCF plc
    Unit 5 Rotunda, Business Centre,
    Thorncliffe Park, Chapeltown,
    Sheffield, S35 2PG

    Tel: +44 (0)114 257 2200
    Fax: +44 (0)114 257 0022

    OCF Hotline: 0845 702 3829

    The Innovation Centre, Sci-Tech Daresbury,
    Keckwick Lane, Daresbury,
    Cheshire, WA4 4FS

    Tel: +44 (0)1925 607 360
    Fax: +44 (0)114 257 0022

    OCF plc is a company registered in England and Wales. Registered number 4132533. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield, S35 2PG
