OCF deploys UK academia’s first IBM POWER9 systems

New systems supporting research into superfluid flows and Deep Learning at Queen Mary University of London and Newcastle University.

Two Universities, Queen Mary University of London (QMUL) and Newcastle University, are the first UK academic organisations to deploy IBM’s POWER9 system, delivering unprecedented performance for modern High-Performance Computing (HPC), Analytics and Artificial Intelligence (AI) workloads. Working with OCF, the high-performance compute, storage and data analytics integrator, both Universities have taken delivery of the systems, which OCF will integrate into their existing HPC infrastructures.

OCF Deploys Petascale Lenovo Supercomputer at University of Southampton

Researchers from across the University of Southampton are benefitting from a new high performance computing (HPC) machine named Iridis, which has entered the Top500, debuting at 251 on the list. The new 1,300 teraflops system was designed, integrated and configured by high performance compute, storage and data analytics integrator, OCF, and will support research demanding traditional HPC as well as projects requiring large scale deep storage, big data analytics, web platforms for bioinformatics, and AI services.

Over the past decade, the University has seen a 425 per cent increase in the number of research projects using HPC services, across multiple disciplines such as engineering, chemistry, physics, medicine and computer science. The new HPC system is also supporting the University’s Wolfson Unit. Best known for ship model testing, sailing yacht performance and ship design software, the Unit was founded in 1967 to enable industry to benefit from the facilities, academic excellence and research activities at the University of Southampton.

CLIMB victorious at HPC Wire Readers’ Choice Awards

A solution designed and integrated by OCF has been announced as a winner in two categories at the 2017 HPC Wire Readers’ Choice Awards.

Announced at SuperComputing 2017 in Denver, USA, the Cloud Infrastructure for Microbial Bioinformatics (CLIMB) has won the awards for ‘Best Use of HPC in Life Sciences’ and ‘Best HPC Collaboration in Academia, Government or Industry’.

CLIMB is a UK-based cloud project funded by the UK’s Medical Research Council to support research by academic microbiologists. The current live system is located across the Universities of Birmingham, Cardiff and Warwick.

The ‘Best Use of HPC in Life Sciences’ award recognised real-time analysis of Zika genomes using CLIMB cloud computing, supported by Lenovo, OpenStack, IBM, Red Hat and Dell EMC. The ‘Best HPC Collaboration in Academia, Government or Industry’ award recognised CLIMB’s provision of resources for projects with global impact on public health, drawing on the expertise of Lenovo, OpenStack, IBM Spectrum Scale, Red Hat and Dell EMC.

OCF helps to develop AI infrastructure at the University of Oxford

The University of Oxford has become the first academic institution in the UK to take delivery of an NVIDIA DGX-1 supercomputer powered by the latest GPU technology – NVIDIA Volta.

Picking up a Petaflop: (L-R) Dr. David Jenkins, Head of Research Computing and Support Services, University of Oxford; Dr. Steven Young, ARC Technical Services Manager; and Dr. Robert Esnouf, Director of Research Computing BDI & Head of Research Computing Core WHG

The new system has been supplied by OCF and funded via a collaboration between the University’s IT Services department, Wellcome Centre for Human Genetics (WHG), Big Data Institute (BDI) and the Weatherall Institute of Molecular Medicine (WIMM).

The system will be housed and managed by the University’s Advanced Research Computing (ARC) facility and is a response to the explosion in demand from researchers exploring how deep learning can be applied to their research. For the life sciences, current research includes more accurate sequencing, predicting gene expression levels, simulating brain activity, predicting outbreaks of diseases such as malaria and analysing population-scale data such as those from the UK Biobank study. In other disciplines it includes research into autonomous vehicles, natural language processing and computer vision.

OCF supercomputer speeds up research at the University of Exeter

Researchers from across the University of Exeter are benefitting from a new High Performance Computing (HPC) machine, called Isca. Existing departmental HPC resources within Life Sciences and Physics were reaching the end of their life, so using funding from the University and a large grant from the Medical Research Council, the University acquired a new, central core HPC resource to support researchers University-wide across numerous disciplines.

The new system is already contributing to research into the modelling and formation of stars and galaxies, to Computational Fluid Dynamics (CFD) work within Engineering on how flooding affects bridges, and to Medical School studies of genetic traits in diabetes using data from UK Biobank. The HPC resource is now in use by more than 200 researchers across 30+ active research projects in the Life Sciences, Engineering, Mathematics, Astrophysics and Computing departments.

OCF achieves Elite Partner status with NVIDIA

OCF has successfully achieved Elite Partner status with NVIDIA® for Accelerated Computing, becoming only the second business partner in Northern Europe to achieve this level.

Awarded in recognition of OCF’s ability and competency to integrate a wide portfolio of NVIDIA’s Accelerated Computing products including TESLA® P100 and DGX-1™, the Elite Partner level is only awarded to partners that have the knowledge and skills to support the integration of GPUs, as well as the industry reach to support and attract the right companies and customers using accelerators.

“For customers using GPUs, or potential customers, earning this specialty ‘underwrites’ our service and gives them extra confidence that we possess the skills and knowledge to deliver the processing power to support their businesses,” says Steve Reynolds, Sales Director, OCF plc. “This award complements OCF’s portfolio of partner accreditations and demonstrates our commitment to the vendor.”

OCF delivers new 600 Teraflop HPC machine for University of Bristol

For over a decade the University of Bristol has been contributing to world-leading and life-changing scientific research using High Performance Computing (HPC), having invested over £16 million in HPC and research data storage. To continue meeting the needs of its researchers working with large and complex datasets, the University has acquired a new HPC machine, named BlueCrystal 4 (BC4).

Designed, integrated and configured by the HPC, storage and data analytics integrator OCF, BC4 has more than 15,000 cores, making it the largest UK University system by core count, and a theoretical peak performance of 600 Teraflops.

Over 1,000 researchers in areas such as paleobiology, earth science, biochemistry, mathematics, physics, molecular modelling, life sciences, and aerospace engineering will be taking advantage of the new system. BC4 is already aiding research into new medicines and drug absorption by the human body.

“We have researchers looking at whole-planet modelling with the aim of trying to understand the earth’s climate, climate change and how that’s going to evolve, as well as others looking at rotary blade design for helicopters, the mutation of genes, the spread of disease and where diseases come from,” said Dr Christopher Woods, EPSRC Research Software Engineer Fellow, University of Bristol. “Early benchmarking is showing that the new system is three times faster than our previous cluster – research that used to take a month now takes a week, and what took a week now only takes a few hours. That’s a massive improvement that’ll be a great benefit to research at the University.”

BC4 uses Lenovo NeXtScale compute nodes, each comprising two 14-core 2.4 GHz Intel Broadwell CPUs with 128 GiB of RAM. It also includes 32 nodes, each with two NVIDIA Pascal P100 GPUs, plus one GPU login node, designed into the rack by Lenovo’s engineering team to meet the specific requirements of the University.
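
As a rough sanity check of the quoted peak figure (a back-of-the-envelope estimate only, assuming 16 double-precision floating-point operations per core per cycle for Broadwell with AVX2 FMA, a figure not stated above), the CPU partition alone gives

    15\,000\ \text{cores} \times 2.4\ \text{GHz} \times 16\ \text{FLOPs/cycle} \approx 576\ \text{TFLOPS},

which is broadly consistent with the quoted peak of 600 Teraflops, with the P100 GPU nodes adding further headroom on top.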

Connecting the cluster are several high-speed networks, the fastest of which is a two-level Intel Omni-Path Architecture network running at 100 Gb/s. BC4’s storage comprises one petabyte of disk provided by DDN’s GS7K and IME systems, running IBM’s Spectrum Scale parallel file system.

Effective benchmarking and optimisation, using the benchmarking capabilities of Lenovo’s HPC research centre in Stuttgart, the first of its kind, has ensured that BC4 is highly efficient in terms of physical footprint while fully utilising the 30 kW per rack energy limit. Lenovo’s commitment to third-party integration has allowed the University to avoid vendor lock-in while permitting new hardware to be added easily between refresh cycles.

Dr Christopher Woods continues: “To help with the interactive use of the cluster, BC4 has a visualisation node equipped with NVIDIA Grid vGPUs so it helps our scientists to visualise the work they’re doing, so researchers can use the system even if they’ve not used an HPC machine before.”

Housed at VIRTUS’ LONDON4, the UK’s first shared data centre for research and education, in Slough, BC4 is the first of the University’s supercomputers to be held at an independent facility. The system is directly connected to the Bristol campus via JISC’s high-speed Janet network. Kelly Scott, account director, education at VIRTUS Data Centres, said, “LONDON4 is specifically designed with the capacity to host ultra-high-density infrastructure and high performance computing platforms, making it an ideal environment for systems like BC4. The University of Bristol is the 22nd organisation to join the JISC Shared Data Centre in our facility, which enables institutions to collaborate and share infrastructure resources to drive real innovation that advances meaningful research.”

The several hundred applications running on the University’s previous cluster will be replicated onto the new system, allowing researchers to create more applications and better-scaling software. Applications can be moved directly onto BC4 without the need for re-engineering.

“We’re now in our tenth year of using HPC in our facility. We’ve endeavoured to make each phase of BlueCrystal bigger and better than the last, embracing new technology for the benefit of our users and researchers,” commented Caroline Gardiner, Academic Research Facilitator at the University of Bristol.

Simon Burbidge, Director of Advanced Computing comments: “It is with great excitement that I take on the role of Director of Advanced Computing at this time, and I look forward to enabling the University’s ambitious research programmes through the provision of the latest computational techniques and simulations.”

Due to be launched at an event at the University of Bristol on 24th May, BC4 will serve over 1,000 users carried over from BlueCrystal Phase 3.

Supporting scientific research at the Atomic Weapons Establishment

AWE benefiting from new end-to-end IBM Spectrum Scale and POWER8 systems

We are pleased to announce that we are supporting scientific research at the UK Atomic Weapons Establishment (AWE) with the design, testing and implementation of a new HPC cluster and separate big data storage system.

AWE has been synonymous with science, engineering and technology excellence in support of the UK’s nuclear deterrent for more than 60 years. AWE, working to the required Ministry of Defence programme, provides and maintains warheads for the Trident nuclear deterrent.

The new HPC system is built on IBM’s POWER8 architecture, with a separate parallel file system, called Cedar 3, built on IBM Spectrum Scale. In early benchmark testing, Cedar 3 is operating 10 times faster than the previous high-performance storage system at AWE. Both server and storage systems use IBM Spectrum Protect for data backup and recovery.

“Our work to maintain and support the Trident missile system is undertaken without actual nuclear testing, which has been the case ever since the UK became a signatory to the Comprehensive Nuclear Test Ban Treaty (CTBT); this creates extraordinary scientific and technical challenges – something we’re tackling head on with OCF,” comments Paul Tomlinson, HPC Operations at AWE. “We rely on cutting-edge science and computational methodologies to verify the safety and effectiveness of the warhead stockpile without conducting live testing. The new HPC system will be vital in this ongoing research.”

AWE works across the entire life cycle of warheads, from initial design and concept through manufacture, assembly and in-service support to decommissioning and disposal, ensuring maximum safety and protecting national security at all times.

The central data storage, Cedar 3, will be in use for scientists across the AWE campus, with data replicated across the site.

“The work of AWE is of national importance and so its team of scientists need complete faith and trust in the HPC and big data systems in use behind the scenes, and the people deploying the technology,” says Julian Fielden, managing director, OCF. “Through our partnership with IBM, and the people, skills and expertise of our own team, we have been able to deliver a system which will enable AWE to maintain its vital research.”

The new HPC system runs on a suite of IBM POWER8 processor-based Power Systems servers running the IBM AIX V7.1 and Red Hat Enterprise Linux operating systems. The HPC platform consists of IBM Power E880, IBM Power S824L, IBM Power S812L and IBM Power S822 servers, providing ample processing capability to support all of AWE’s computational needs, plus an IBM tape library device to back up computation data.

Cedar 3, AWE’s parallel file system storage, is an IBM Storwize storage system. IBM Spectrum Scale is in use to enable AWE to more easily manage data access amongst multiple servers.

About the Atomic Weapons Establishment (AWE)
The Atomic Weapons Establishment has been central to the defence of the United Kingdom for more than 60 years through its provision and maintenance of the warheads for the country’s nuclear deterrent. This encompasses the initial concept, assessment and design of the nuclear warheads, through component manufacture and assembly, in-service support, decommissioning and then disposal.

Around 4,500 staff are employed at the AWE sites together with over 2,000 contractors. The workforce consists of scientists, engineers, technicians, crafts-people and safety specialists, as well as business and administrative experts – many of whom are leaders in their field. The AWE sites and facilities are government owned but the UK Ministry of Defence (MOD) has a government-owned contractor-operated contract with AWE Management Limited (AWE ML) to manage the day-to-day operations and maintenance of the UK’s nuclear stockpile. AWE ML is formed of three shareholders – Lockheed Martin, Serco and Jacobs Engineering Group. For further information, visit: http://www.awe.co.uk

eMedLab Shortlisted for UK Cloud Award

Congratulations eMedLab on being shortlisted for the UK Cloud Awards 2017

A solution designed and integrated by OCF has been shortlisted in the 2017 UK Cloud Awards in the ‘Best Public Sector Project’ category.

The MRC eMedLab consortium consists of University College London, Queen Mary University of London, the London School of Hygiene & Tropical Medicine, the Francis Crick Institute, the Wellcome Trust Sanger Institute, the EMBL European Bioinformatics Institute and King’s College London, and was funded by the Medical Research Council with £8.9M.

The vision of MRC eMedLab is to maximise the gains for patients and for medical research that will come from the explosion in human health data. To realise this potential, the consortium of seven prestigious biomedical research organisations needs to accumulate medical and biological data of unprecedented scale and complexity, to coordinate it, to store it safely and securely, and to make it readily available to interested researchers.

The partnership’s aim was to build a private cloud infrastructure to deliver significant computing capacity and storage to support the analysis of biomedical genomics, imaging and clinical data. Initially its main focus was on cancer, cardiovascular and rare diseases; this has since broadened to include neurodegenerative and infectious diseases.

The MRC eMedLab system is a private cloud with significant data storage capacity and very fast internal networking designed specifically for the types of computing jobs used in biomedical research. The new high-performance and big data environment consists of:

  • Red Hat Enterprise Linux OpenStack Platform
  • Red Hat Satellite
  • Lenovo System x Flex system with 252 hypervisor nodes and Mellanox 10Gb network with a 40Gb/56Gb core
  • Five tiers of storage, managed by IBM Spectrum Scale (formerly GPFS), for cost effective data storage – scratch, Frequently Accessed Research Data, virtual clusters image storage, medium-term storage and previous versions backup.

The project has become a key infrastructure resource for the Medical Research Council (MRC), which has funded six of these projects. The success has been attributed to MRC eMedLab’s concept of partnership working, where everybody uses one shared resource. This means not just sharing the HPC resource and sharing it efficiently, but also sharing the learning, the technology and the science at MRC eMedLab. Jacky Pallas, Director of Research Platforms, UCL, comments, “From the beginning there was an excellent partnership between the MRC eMedLab operations team and the technical specialists at OCF, working together to solve the issues which inevitably arise when building and testing a novel compute and data storage system.”

In total, there are over 20 different projects running on the MRC eMedLab infrastructure which include:

  • The London School of Hygiene & Tropical Medicine is working on a project looking at population levels and the prevalence of HIV and TB, how the pathogen/bacteria evolve and the genetics of human resistance. This research is done in collaboration with researchers in Africa and Vietnam
  • Francis Crick Institute cancer based science – supporting a project run by Professor Charles Swanton investigating personalised immunotherapies against tumours
  • Great Ormond Street Hospital – collaboration on research on rare diseases in children
  • Linking genomics and brain imaging to better understand dementia
  • Studying rare mitochondrial diseases and understanding how stem cells function
  • Using UK Biobank data to identify and improve treatments for cardiovascular diseases
  • Deep mining of cancer genomics data to understand how cancer tumours evolve
  • Analysing virus genome sequences to enable the modelling and monitoring of flu-type epidemics

The MRC eMedLab private cloud has shown that these new computing technologies can be used effectively to support research in the life sciences sector.

Professor Taane Clark, Professor of Genomics and Global Health, London School of Hygiene and Tropical Medicine comments, “The processing power of the MRC eMedLab computing resource has improved our ability to analyse human and pathogen genomic data, and is assisting us with providing insights into infectious disease genomics, especially in malaria host susceptibility, tuberculosis drug resistance and determining host-pathogen interactions.”

Virtual HPC Clusters Enable Cancer, Cardio-Vascular and Rare Diseases Research

OpenStack-based cloud enables cost-effective self-provisioned compute resources

eMedLab, a partnership of seven leading bioinformatics research and academic institutions, is using a new private cloud, HPC environment and big data system to support the efforts of hundreds of researchers studying cancers, cardio-vascular and rare diseases. Their research focuses on understanding the causes of these diseases and how a person’s genetics may influence their predisposition to the disease and potential treatment responses.

The new HPC cloud environment combines a Red Hat Enterprise Linux OpenStack Platform with Lenovo Flex System hardware to enable the creation of virtual HPC clusters bespoke to individual researchers’ requirements. The system has been designed, integrated and configured by OCF, an HPC, big data and predictive analytics provider, working closely with its partners Red Hat, Lenovo, Mellanox Technologies and in collaboration with eMedlab’s research technologists.

The High Performance Computing environment is being hosted at a shared data centre for education and research, offered by digital technologies charity Jisc. The data centre has the capacity, technological capability and flexibility to future-proof and support all of eMedLab’s HPC needs, with its ability to accommodate multiple and varied research projects concurrently in a highly collaborative environment. The ground-breaking facility is focused on the needs of the biomedical community and will revolutionise the way data sets are shared between leading scientific institutions internationally.

The eMedLab partnership was formed in 2014 with funding from the Medical Research Council. Original members University College London, Queen Mary University of London, London School of Hygiene & Tropical Medicine, the Francis Crick Institute, the Wellcome Trust Sanger Institute and the EMBL European Bioinformatics Institute have been joined recently by King’s College London.

“Bioinformatics is a very, very data intensive discipline,” says Jacky Pallas, Director of Research Platforms, University College London. “We want to study a lot of de-identified, anonymous human data. It’s not practical – from data transfer and data storage perspectives – to have scientists replicating the same datasets across their own, separate physical HPC resources, so we’re creating a single store for up to 6 Petabytes of data and a shared HPC environment within which researchers can build their own virtual clusters to support their work.”

The Red Hat Enterprise Linux OpenStack Platform, a highly scalable Infrastructure-as-a-Service (IaaS) solution, enables scientists to create and use virtual clusters bespoke to their needs, allowing them to select compute memory, processors, networking, storage and archiving policies, all orchestrated by a simple web-based user interface. Researchers will be able to access up to 6,000 cores of processing power.
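
To illustrate what this kind of self-service provisioning can look like in practice, the sketch below uses the generic openstacksdk Python client to boot a small virtual cluster. It is a hedged illustration only, not eMedLab’s actual tooling; the cloud, image, flavour and network names (“emedlab”, “CentOS-7”, “m1.xlarge”, “research-net”) are hypothetical placeholders.

    import openstack

    # Connect using credentials defined in clouds.yaml; "emedlab" is a hypothetical cloud name
    conn = openstack.connect(cloud="emedlab")

    # Hypothetical image, flavour and network chosen by the researcher
    image = conn.compute.find_image("CentOS-7")
    flavor = conn.compute.find_flavor("m1.xlarge")
    network = conn.network.find_network("research-net")

    # Boot four identical nodes to form a small virtual cluster
    nodes = []
    for i in range(4):
        server = conn.compute.create_server(
            name=f"vcluster-node-{i}",
            image_id=image.id,
            flavor_id=flavor.id,
            networks=[{"uuid": network.id}],
        )
        nodes.append(conn.compute.wait_for_server(server))

    for node in nodes:
        print(node.name, node.status)

In a production environment this pattern would typically be wrapped in orchestration templates or the web-based interface described above, rather than run by hand.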

“We generate such large quantities of data that it can take weeks to transfer data from one site to another,” says Tim Cutts, Head of Scientific Computing, the Wellcome Trust Sanger Institute. “Data in eMedLab will stay in one secure place and researchers will be able to dynamically create their own virtual HPC cluster to run their software and algorithms to interrogate the data, choosing the number of cores, operating system and other attributes to create the ideal cluster for their research.”

Tim adds: “The Red Hat Enterprise Linux OpenStack Platform enables our researchers to do this rapidly and using open standards which can be shared with the community.”

Arif Ali, Technical Director of OCF says: “The private cloud HPC environment offers a flexible solution through which virtual clusters can be deployed for specific workloads. The multi-tenancy features of the Red Hat platform enable different institutions and research groups to securely co-exist on the same hardware, and share data when appropriate.”

“This is a tremendous and important win for Red Hat,” says Radhesh Balakrishnan, general manager, OpenStack, Red Hat. “eMedLab’s deployment of Red Hat Enterprise Linux OpenStack Platform into its HPC environment for this data intensive project further highlights our leadership in this space and ability to deliver a fully supported, stable, and reliable production-ready OpenStack solution.

“Red Hat technology allows consortia such as eMedLab to use cutting-edge self-service compute, storage, networking and other new services as these are adopted as core OpenStack technologies, while still offering the world-class service and support that Red Hat is renowned for. The use of Red Hat Enterprise Linux OpenStack Platform provides cutting-edge technologies along with enterprise-grade support and services, leaving researchers free to focus on their research and other medical challenges.”

“Mellanox end-to-end Ethernet solutions enable cloud infrastructures to optimize their performance and to accelerate big data analytics,” said Kevin Deierling, vice president of marketing at Mellanox Technologies. “Intelligent interconnect with offloading technologies, such as RDMA and cloud accelerations, is key for building the most efficient private and cloud environments. The collaboration between the organisations as part of this project demonstrates the power of the eco-systems to drive research and discovery forward.”

The new high-performance and big data environment consists of:

  • Red Hat Enterprise Linux OpenStack Platform
  • Red Hat Satellite
  • Lenovo System x Flex system with 252 hypervisor nodes and Mellanox 10Gb network with a 40Gb/56Gb core
  • Five tiers of storage, managed by IBM Spectrum Scale (formerly GPFS), for cost effective data storage – scratch, Frequently Accessed Research Data, virtual clusters image storage, medium-term storage and previous versions backup.

    Contact Us

    HEAD OFFICE:
    OCF plc
    Unit 5, Rotunda Business Centre,
    Thorncliffe Park, Chapeltown,
    Sheffield, S35 2PG

    Tel: +44 (0)114 257 2200
    Fax: +44 (0)114 257 0022
    E-Mail: info@ocf.co.uk

    SUPPORT DETAILS:
    OCF Hotline: 0845 702 3829
    E-Mail: support@ocf.co.uk
    Helpdesk: support.ocf.co.uk

    DARESBURY OFFICE:
    The Innovation Centre, Sci-Tech Daresbury,
    Keckwick Lane, Daresbury,
    Cheshire, WA4 4FS

    Tel: +44 (0)1925 607 360
    Fax: +44 (0)114 257 0022
    E-Mail: info@ocf.co.uk

    OCF plc is a company registered in England and Wales. Registered number 4132533. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield, S35 2PG
