University of Birmingham Receives Honours in 2018 HPCwire Readers’ and Editors’ Choice Awards

Dallas, Texas — November 12, 2018 — The University of Birmingham has been recognised in the annual HPCwire Readers’ and Editors’ Choice Awards, presented at the 2018 International Conference for High Performance Computing, Networking, Storage and Analysis (SC18), in Dallas, Texas. The list of winners was revealed at the HPCwire booth at the event, and on the HPCwire website, located at www.HPCwire.com. The University of Birmingham was recognised with the following honour:

  • Readers’/Editors’ Choice: Best Use of HPC in Manufacturing

The coveted annual HPCwire Readers’ and Editors’ Choice Awards are determined through a nomination and voting process with the global HPCwire community, as well as selections from the HPCwire editors. The awards are an annual feature of the publication and constitute prestigious recognition from the HPC community. These awards are revealed each year to kick off the annual supercomputing conference, which showcases high performance computing, networking, storage, and data analysis.

The PRISM2 group, a research centre at the University of Birmingham, undertakes modelling of materials and manufacturing using the University’s central HPC and storage systems provided by Advanced Research Computing. The award was received for work with industrial partner Rolls-Royce, with whom the PRISM2 group is currently engaged in a long-term collaborative project. This project, together with the innovative research conducted by PRISM2, plays a key part in ensuring that the technology employed by Rolls-Royce helps the UK stay at the forefront of advanced manufacturing, particularly in the aerospace sector.

Professor Jeffery Brooks, Director of PRISM2 and Hanson Professor of Industrial Metallurgy at the University of Birmingham, said, “The models we develop for manufacturing simulation are computationally very demanding and require huge amounts of resource, and the BlueBEAR HPC facility is essential in supporting my team.”

The BlueBEAR facility uses Lenovo direct-to-node water-cooled HPC systems with Mellanox SwitchIB-2 based EDR InfiniBand. Storage is provided using IBM Spectrum Scale and IBM Spectrum Protect, and jobs are scheduled using the SLURM scheduling system. The system is designed and integrated by the University’s Advanced Research Computing team and supplied by OCF, the University’s framework supplier for research computing systems.
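
As an illustration of how researchers typically reach a scheduler such as SLURM, the short Python sketch below builds a minimal batch script and submits it with the standard sbatch command. This is a generic, hedged example: the partition name, resource requests and application binary are hypothetical and are not taken from the BlueBEAR configuration.

    # Minimal sketch of submitting a job to a SLURM-managed cluster.
    # Partition, resources and the application binary are hypothetical examples.
    import subprocess
    from pathlib import Path

    lines = [
        "#!/bin/bash",
        "#SBATCH --job-name=materials-model",   # name shown in the queue
        "#SBATCH --ntasks=56",                  # MPI ranks, e.g. two 28-core nodes
        "#SBATCH --time=02:00:00",              # wall-clock limit
        "#SBATCH --partition=compute",          # hypothetical partition name
        "srun ./manufacturing_simulation",      # hypothetical application binary
    ]
    Path("job.sh").write_text("\n".join(lines) + "\n")

    # Hand the script to SLURM; sbatch prints the assigned job ID.
    result = subprocess.run(["sbatch", "job.sh"], capture_output=True, text=True, check=True)
    print(result.stdout.strip())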

OCF is a UK-based high-performance compute, storage and data analytics integrator. Business Development Manager, Georgina Ellis, commented, “We’ve worked with the University of Birmingham over a number of years. The framework agreement gives the Advanced Research Computing team access to a wide variety of vendors, and the team know what they want to deliver and engage with a wide variety of partners to deliver value for the researchers at Birmingham.”

“This year marks the 15th anniversary of the HPCwire Readers’ and Editors’ Choice Awards. These awards serve as a pillar in our community, acknowledging major achievements, outstanding leadership and innovative breakthroughs,” said Tom Tabor, CEO of Tabor Communications, publisher of HPCwire. “Receiving an HPCwire award signifies undeniable community support and recognition. We are proud to acknowledge our winners this year and, as always, to allow our readers’ voices to be heard. I would like to personally congratulate each and every one of our winners, as their awards come well deserved.”

More information on these awards can be found at the HPCwire website (http://www.HPCwire.com) or on Twitter through the following hashtag: #HPCwireAwards.

About HPCwire
HPCwire is the #1 news and information resource covering the fastest computers in the world and the people who run them. With a legacy dating back to 1986, HPCwire has built a reputation for world-class editorial and journalism, making it the news source of choice for science, technology and business professionals interested in high performance and data-intensive computing. Visit HPCwire at www.hpcwire.com.

About University of Birmingham

PRISM2 is a research centre at the University of Birmingham, with expertise in the modelling of materials, manufacturing and design for high technology applications in the aerospace and power generation sectors. It is located in the Interdisciplinary Research Centre (IRC) on the University of Birmingham’s Edgbaston campus. There are strong links with the Manufacturing Technology Centre (MTC) and High Temperature Research Centre (HTRC) based at Ansty Park, Coventry. See www.prism2.org for more information.

Advanced Research Computing is part of the central IT Services department at the University of Birmingham and comprises systems, outreach and research software engineering teams that help Birmingham’s researchers deliver world-leading research. Founded in 1900, the University of Birmingham is a research-intensive university and a member of the prestigious Russell Group of universities.

See www.birmingham.ac.uk/bear for more information, or follow @uob_rescomp on Twitter.

OCF deploys UK academia’s first IBM POWER9 systems

New systems supporting research into superfluid flows and Deep Learning at Queen Mary University of London and Newcastle University.

Two universities, Queen Mary University of London (QMUL) and Newcastle University, are the first UK academic organisations to deploy IBM’s POWER9 system, delivering unprecedented performance for modern High-Performance Computing (HPC), Analytics and Artificial Intelligence (AI) workloads. Working with OCF, the high-performance compute, storage and data analytics integrator, both universities have taken delivery of the systems, which OCF will integrate into their existing HPC infrastructures.

OCF Deploys Petascale Lenovo Supercomputer at University of Southampton

Researchers from across the University of Southampton are benefitting from a new high performance computing (HPC) machine named Iridis, which has entered the Top500, debuting at 251 on the list. The new 1,300-teraflop system was designed, integrated and configured by high performance compute, storage and data analytics integrator OCF, and will support research demanding traditional HPC as well as projects requiring large-scale deep storage, big data analytics, web platforms for bioinformatics, and AI services.

Over the past decade, the University has seen a 425 per cent increase in the number of research projects using HPC services, across multiple disciplines such as engineering, chemistry, physics, medicine and computer science. The new HPC system is also supporting the University’s Wolfson Unit. Best known for ship model testing, sailing yacht performance and ship design software, the Unit was founded in 1967 to enable industry to benefit from the facilities, academic excellence and research activities at the University of Southampton.

CLIMB victorious at HPCwire Readers’ Choice Awards

A solution designed and integrated by OCF has been announced as a winner in two categories at the 2017 HPCwire Readers’ Choice Awards.

Announced at SuperComputing 2017 in Denver, USA, the Cloud Infrastructure for Microbial Bioinformatics (CLIMB) has won the awards for ‘Best Use of HPC in Life Sciences’ and ‘Best HPC Collaboration in Academia, Government or Industry’.

CLIMB is a UK-based cloud project funded by the UK’s Medical Research Council to support research by academic microbiologists. The current live system is located across the Universities of Birmingham, Cardiff and Warwick.

The Best Use of HPC in Life Sciences Award was awarded for real-time analysis of Zika genomes using CLIMB cloud computing, supported by Lenovo, OpenStack, IBM, Red Hat and Dell EMC.  The Best HPC Collaboration in Academia, Government or Industry was awarded to CLIMB for the provision of resources for projects that globally impact public health, using the expertise of Lenovo, OpenStack, IBM Spectrum Scale, Red Hat, and Dell EMC.

OCF helps to develop AI infrastructure at the University of Oxford

The University of Oxford has become the first academic institution in the UK to take delivery of an NVIDIA DGX-1 supercomputer powered by the latest GPU technology – NVIDIA Volta.

Picking up a Petaflop: (L-R) Dr. David Jenkins, Head of Research Computing and Support Services, University of Oxford; Dr. Steven Young, ARC Technical Services Manager; and Dr. Robert Esnouf, Director of Research Computing BDI & Head of Research Computing Core WHG

The new system has been supplied by OCF and funded via a collaboration between the University’s IT Services department, Wellcome Centre for Human Genetics (WHG), Big Data Institute (BDI) and the Weatherall Institute of Molecular Medicine (WIMM).

The system will be housed and managed by the University’s Advanced Research Computing (ARC) facility and is a response to the explosion in demand from researchers keen to explore all avenues for applying deep learning to their research. For the life sciences, current research includes more accurate sequencing, predicting gene expression levels, simulating brain activity, predicting outbreaks of diseases such as malaria and analysing population-scale data such as those from the UK Biobank study. In other disciplines it includes research into autonomous vehicles, natural language processing and computer vision.

OCF supercomputer speeds up research at the University of Exeter

Researchers from across the University of Exeter are benefitting from a new High Performance Computing (HPC) machine, called Isca. Existing departmental HPC resources within Life Sciences and Physics were coming to the end of their life, so, using funding from the University and a large grant from the Medical Research Council, the University acquired a new, central core HPC resource to support researchers University-wide across numerous disciplines.

The new system has already been contributing to research into the modelling and formation of stars and galaxies, using Computational Fluid Dynamics (CFD) within Engineering to understand how flooding affects bridges, and being used in the Medical School to investigate genetic traits in diabetes with data from the UK Biobank. The HPC resource is now in use by more than 200 researchers across 30+ active research projects in the Life Sciences, Engineering, Mathematics, Astrophysics and Computing departments.

OCF achieves Elite Partner status with NVIDIA

OCF has successfully achieved Elite Partner status with NVIDIA® for Accelerated Computing, becoming only the second business partner in Northern Europe to achieve this level.

Awarded in recognition of OCF’s ability and competency to integrate a wide portfolio of NVIDIA’s Accelerated Computing products, including TESLA® P100 and DGX-1™, the Elite Partner level is granted only to partners that have the knowledge and skills to support the integration of GPUs, as well as the industry reach to support and attract the right companies and customers using accelerators.

“For customers using GPUs, or potential customers, earning this specialty ‘underwrites’ our service and gives them extra confidence that we possess the skills and knowledge to deliver the processing power to support their businesses,” says Steve Reynolds, Sales Director, OCF plc. “This award complements OCF’s portfolio of partner accreditations and demonstrates our commitment to the vendor.”

OCF delivers new 600 Teraflop HPC machine for University of Bristol

For over a decade the University of Bristol has been contributing to world-leading and life-changing scientific research using High Performance Computing (HPC), having invested over £16 million in HPC and research data storage. Its researchers, who work with large and complex data, will now benefit from a new HPC machine, named BlueCrystal 4 (BC4).

Designed, integrated and configured by the HPC, storage and data analytics integrator OCF, BC4 has more than 15,000 cores, making it the largest UK university system by core count, and a theoretical peak performance of 600 teraflops.

Over 1,000 researchers in areas such as paleobiology, earth science, biochemistry, mathematics, physics, molecular modelling, life sciences, and aerospace engineering will be taking advantage of the new system. BC4 is already aiding research into new medicines and drug absorption by the human body.

“We have researchers looking at whole-planet modelling with the aim of trying to understand the earth’s climate, climate change and how that’s going to evolve, as well as others looking at rotary blade design for helicopters, the mutation of genes, the spread of disease and where diseases come from,” said Dr Christopher Woods, EPSRC Research Software Engineer Fellow, University of Bristol. “Early benchmarking is showing that the new system is three times faster than our previous cluster – research that used to take a month now takes a week, and what took a week now only takes a few hours. That’s a massive improvement that’ll be a great benefit to research at the University.”

BC4 uses Lenovo NeXtScale compute nodes, each comprising two 14-core 2.4 GHz Intel Broadwell CPUs with 128 GiB of RAM. It also includes 32 nodes each with two NVIDIA Pascal P100 GPUs, plus one GPU login node, designed into the rack by Lenovo’s engineering team to meet the specific requirements of the University.
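
As a rough sanity check on the quoted 600-teraflop figure, the CPU partition’s theoretical peak can be estimated from the node specification above. The short Python sketch below assumes 16 double-precision floating-point operations per core per cycle (the fused multiply-add rate commonly quoted for Broadwell); that per-cycle figure is an assumption for illustration, not a number from the original announcement.

    # Back-of-the-envelope estimate of BC4's CPU-only theoretical peak.
    # The FLOPs-per-cycle figure is an assumption (Broadwell AVX2 FMA), not a quoted spec.
    cores = 15_000            # "more than 15,000 cores"
    clock_hz = 2.4e9          # 2.4 GHz Intel Broadwell
    flops_per_cycle = 16      # assumed double-precision throughput per core per cycle

    cpu_peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
    print(f"Estimated CPU peak: {cpu_peak_tflops:.0f} TFLOPS")   # ~576 TFLOPS

That estimate lands in the same ballpark as the quoted 600 teraflops, before any contribution from the P100 GPU nodes is counted.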

Connecting the cluster are several high-speed networks, the fastest of which is a two-level Intel Omni-Path Architecture network running at 100 Gb/s. BC4’s storage comprises one petabyte of disk provided by DDN’s GS7K and IME systems running IBM’s Spectrum Scale parallel file system.

Effective benchmarking and optimisation, using the capabilities of Lenovo’s HPC research centre in Stuttgart, the first of its kind, have ensured that BC4 is highly efficient in terms of physical footprint while fully utilising the 30 kW per rack power limit. Lenovo’s commitment to third-party integration has allowed the University to avoid vendor lock-in while permitting new hardware to be added easily between refresh cycles.

Dr Christopher Woods continues: “To help with the interactive use of the cluster, BC4 has a visualisation node equipped with NVIDIA Grid vGPUs so it helps our scientists to visualise the work they’re doing, so researchers can use the system even if they’ve not used an HPC machine before.”

Housed at VIRTUS’ LONDON4 facility in Slough, the UK’s first shared data centre for research and education, BC4 is the first of the University’s supercomputers to be held at an independent facility. The system is directly connected to the Bristol campus via JISC’s high-speed Janet network. Kelly Scott, account director, education at VIRTUS Data Centres, said, “LONDON4 is specifically designed to have the capacity to host ultra-high-density infrastructure and high performance computing platforms, so it is an ideal environment for systems like BC4. The University of Bristol is the 22nd organisation to join the JISC Shared Data Centre in our facility, which enables institutions to collaborate and share infrastructure resources to drive real innovation that advances meaningful research.”

Applications running on the University’s previous cluster, currently numbering in the hundreds, will be replicated onto the new system, allowing researchers to create more applications and better-scaling software. Applications can be moved directly onto BC4 without the need for re-engineering.

“We’re now in our tenth year of using HPC in our facility. We’ve endeavoured to make each phase of BlueCrystal bigger and better than the last, embracing new technology for the benefit of our users and researchers,” commented Caroline Gardiner, Academic Research Facilitator at the University of Bristol.

Simon Burbidge, Director of Advanced Computing, comments: “It is with great excitement that I take on the role of Director of Advanced Computing at this time, and I look forward to enabling the University’s ambitious research programmes through the provision of the latest computational techniques and simulations.”

Due to be launched at an event on 24th May at the University of Bristol, BC4 will support over 1,000 system users carried over from BlueCrystal Phase 3.

Supporting scientific research at the Atomic Weapons Establishment

AWE benefiting from new end-to-end IBM Spectrum Scale and POWER8 systems

We are pleased to announce that we are supporting scientific research at the UK Atomic Weapons Establishment (AWE) with the design, testing and implementation of a new HPC cluster and a separate big data storage system.

AWE has been synonymous with science, engineering and technology excellence in support of the UK’s nuclear deterrent for more than 60 years. AWE, working to the required Ministry of Defence programme, provides and maintains warheads for the Trident nuclear deterrent.

The new HPC system is built on IBM’s POWER8 architecture, with a separate parallel file system, called Cedar 3, built on IBM Spectrum Scale. In early benchmark testing, Cedar 3 is operating 10 times faster than the previous high-performance storage system at AWE. Both server and storage systems use IBM Spectrum Protect for data backup and recovery.

“Our work to maintain and support the Trident missile system is undertaken without actual nuclear testing, which has been the case ever since the UK became a signatory to the Comprehensive Nuclear Test Ban Treaty (CTBT); this creates extraordinary scientific and technical challenges – something we’re tackling head on with OCF,” comments Paul Tomlinson, HPC Operations at AWE. “We rely on cutting-edge science and computational methodologies to verify the safety and effectiveness of the warhead stockpile without conducting live testing. The new HPC system will be vital in this ongoing research.”

From initial design and concept through manufacture, assembly and in-service support to decommissioning and disposal, AWE works across the entire life cycle of the warheads, ensuring maximum safety and protecting national security at all times.

The central data storage, Cedar 3, will be in use for scientists across the AWE campus, with data replicated across the site.

“The work of AWE is of national importance and so its team of scientists need complete faith and trust in the HPC and big data systems in use behind the scenes, and the people deploying the technology,” says Julian Fielden, managing director, OCF. “Through our partnership with IBM, and the people, skills and expertise of our own team, we have been able to deliver a system which will enable AWE to maintain its vital research.”

The new HPC system runs on a suite of IBM POWER8 processor-based Power Systems servers running the IBM AIX V7.1 and Red Hat Enterprise Linux operating systems. The HPC platform consists of IBM Power E880, IBM Power S824L, IBM Power S812L and IBM Power S822 servers to provide ample processing capability to support all of AWE’s computational needs, and an IBM tape library device to back up computation data.

Cedar 3, AWE’s parallel file system storage, is an IBM Storwize storage system. IBM Spectrum Scale is in use to enable AWE to more easily manage data access amongst multiple servers.

About the Atomic Weapons Establishment (AWE)
The Atomic Weapons Establishment has been central to the defence of the United Kingdom for more than 60 years through its provision and maintenance of the warheads for the country’s nuclear deterrent. This encompasses the initial concept, assessment and design of the nuclear warheads, through component manufacture and assembly, in-service support, decommissioning and then disposal.

Around 4,500 staff are employed at the AWE sites together with over 2,000 contractors. The workforce consists of scientists, engineers, technicians, craftspeople and safety specialists, as well as business and administrative experts – many of whom are leaders in their field. The AWE sites and facilities are government-owned, but the UK Ministry of Defence (MOD) has a government-owned contractor-operated contract with AWE Management Limited (AWE ML) to manage the day-to-day operations and maintenance of the UK’s nuclear stockpile. AWE ML is formed of three shareholders – Lockheed Martin, Serco and Jacobs Engineering Group. For further information, visit: http://www.awe.co.uk

eMedLab Shortlisted for UK Cloud Award

Congratulations to eMedLab on being shortlisted for the UK Cloud Awards 2017.

A solution designed and integrated by OCF has been shortlisted in the 2017 UK Cloud Awards in the ‘Best Public Sector Project’ category.

The MRC eMedLab consortium consists of University College London, Queen Mary University of London, London School of Hygiene & Tropical Medicine, the Francis Crick Institute, the Wellcome Trust Sanger Institute, the EMBL European Bioinformatics Institute and King’s College London and was funded by the Medical Research Council (£8.9M).

The vision of MRC eMedLab is to maximise the gains for patients and for medical research that will come from the explosion in human health data. To realise this potential, the consortium of seven prestigious biomedical research organisations needs to accumulate medical and biological data of unprecedented scale and complexity, to coordinate it, to store it safely and securely, and to make it readily available to interested researchers.

The partnership’s aim was to build a private cloud infrastructure for the delivery of significant computing capacity and storage to support the analysis of biomedical genomics, imaging and clinical data. Initially, its main focus was on a range of conditions such as cancer, cardiovascular disease and rare diseases, subsequently broadening out to include neurodegenerative and infectious diseases.

The MRC eMedLab system is a private cloud with significant data storage capacity and very fast internal networking designed specifically for the types of computing jobs used in biomedical research. The new high-performance and big data environment consists of:

  • Red Hat Enterprise Linux OpenStack Platform
  • Red Hat Satellite
  • Lenovo System x Flex system with 252 hypervisor nodes and Mellanox 10Gb network with a 40Gb/56Gb core
  • Five tiers of storage, managed by IBM Spectrum Scale (formerly GPFS), for cost-effective data storage – scratch, Frequently Accessed Research Data, virtual cluster image storage, medium-term storage and previous-version backups.
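
To give a flavour of how researchers consume such an OpenStack-based private cloud, the sketch below uses the openstacksdk Python library to launch a single analysis virtual machine. The cloud entry, image, flavour and network names are hypothetical placeholders rather than actual MRC eMedLab resource names.

    # Minimal sketch of launching an analysis VM on an OpenStack private cloud.
    # The cloud entry (from clouds.yaml) and the image/flavour/network names are
    # hypothetical placeholders, not actual MRC eMedLab resources.
    import openstack

    conn = openstack.connect(cloud="emedlab")     # credentials resolved from clouds.yaml

    server = conn.create_server(
        name="genomics-worker-01",
        image="bio-linux-base",                   # hypothetical research image
        flavor="m1.xlarge",                       # hypothetical flavour
        network="project-internal",               # hypothetical tenant network
        wait=True,                                # block until the VM is active
    )
    print(server.status, server.name)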

The project has become a key infrastructure resource for the Medical Research Council (MRC), which has funded six of these projects. The success has been attributed to MRC eMedLab’s concept of partnership working, where everybody is using one shared resource. This means not just sharing the HPC resource and sharing it efficiently, but also sharing the learning, the technology and the science at MRC eMedLab. Jacky Pallas, Director of Research Platforms, UCL, comments, “From the beginning there was an excellent partnership between the MRC eMedLab operations team and the technical specialists at OCF, working together to solve the issues which inevitably arise when building and testing a novel compute and data storage system.”

In total, there are over 20 different projects running on the MRC eMedLab infrastructure which include:

  • The London School of Hygiene & Tropical Medicine is working on a project looking at population levels and the prevalence of HIV and TB, how the pathogen/bacteria evolve and the genetics of human resistance. This research is done in collaboration with researchers in Africa and Vietnam
  • Francis Crick Institute cancer based science – supporting a project run by Professor Charles Swanton investigating personalised immunotherapies against tumours
  • Great Ormond Street Hospital – collaboration on research on rare diseases in children
  • Linking genomics and brain imaging to better understand dementia
  • Studying rare mitochondrial diseases and understanding how stem cells function
  • Using UK Biobank data to identify and improve treatments for cardiovascular diseases
  • Deep mining of cancer genomics data to understand how cancer tumours evolve
  • Analysing virus genome sequences to enable the modelling and monitoring of infectious flu-type epidemics

The MRC eMedLab private cloud has shown that these new computing technologies can be used effectively to support research in the life sciences sector.

Professor Taane Clark, Professor of Genomics and Global Health, London School of Hygiene and Tropical Medicine comments, “The processing power of the MRC eMedLab computing resource has improved our ability to analyse human and pathogen genomic data, and is assisting us with providing insights into infectious disease genomics, especially in malaria host susceptibility, tuberculosis drug resistance and determining host-pathogen interactions.”

    Contact Us

    HEAD OFFICE:
    OCF plc
    Unit 5 Rotunda, Business Centre,
    Thorncliffe Park, Chapeltown,
    Sheffield, S35 2PG

    Tel: +44 (0)114 257 2200
    Fax: +44 (0)114 257 0022
    E-Mail: info@ocf.co.uk

    SUPPORT DETAILS:
    OCF Hotline: 0845 702 3829
    E-Mail: support@ocf.co.uk
    Helpdesk: support.ocf.co.uk

    DARESBURY OFFICE:
    The Innovation Centre, Sci-Tech Daresbury,
    Keckwick Lane, Daresbury,
    Cheshire, WA4 4FS

    Tel: +44 (0)1925 607 360
    Fax: +44 (0)114 257 0022
    E-Mail: info@ocf.co.uk

    OCF plc is a company registered in England and Wales. Registered number 4132533. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield, S35 2PG
