OCF delivers new 600 Teraflop HPC machine for University of Bristol

For over a decade the University of Bristol has been contributing to world-leading and life-changing scientific research using High Performance Computing (HPC), having invested over £16 million in HPC and research data storage. To continue meeting the needs of its researchers, who work with large and complex datasets, the University is introducing a new HPC machine, named BlueCrystal 4 (BC4).

Designed, integrated and configured by HPC, storage and data analytics integrator OCF, BC4 has more than 15,000 cores, making it the largest UK university system by core count, and a theoretical peak performance of 600 Teraflops.

Over 1,000 researchers in areas such as paleobiology, earth science, biochemistry, mathematics, physics, molecular modelling, life sciences, and aerospace engineering will be taking advantage of the new system. BC4 is already aiding research into new medicines and drug absorption by the human body.

“We have researchers looking at whole-planet modelling with the aim of trying to understand the earth’s climate, climate change and how that’s going to evolve, as well as others looking at rotary blade design for helicopters, the mutation of genes, the spread of disease and where diseases come from,” said Dr Christopher Woods, EPSRC Research Software Engineer Fellow, University of Bristol. “Early benchmarking is showing that the new system is three times faster than our previous cluster – research that used to take a month now takes a week, and what took a week now only takes a few hours. That’s a massive improvement that’ll be a great benefit to research at the University.”

BC4 uses Lenovo NeXtScale compute nodes, each comprising two 14-core 2.4 GHz Intel Broadwell CPUs with 128 GiB of RAM. It also includes 32 nodes, each with two NVIDIA Pascal P100 GPUs, plus one GPU login node, designed into the rack by Lenovo’s engineering team to meet the specific requirements of the University.
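As a rough, unofficial sanity check of the headline figure (our own back-of-envelope assumptions, not figures from the University, Lenovo or Intel), the quoted 600 Teraflops is roughly what the CPU partition alone would provide if each Broadwell core sustains 16 double-precision floating-point operations per cycle through its two AVX2 fused multiply-add units:

    # Back-of-envelope estimate of BC4's theoretical CPU peak (assumed figures only)
    FLOPS_PER_CORE_PER_CYCLE = 16   # assumption: 2 FMA units x 4 doubles x 2 ops (Broadwell AVX2)
    CLOCK_HZ = 2.4e9                # 2.4 GHz, as quoted above
    CORES = 15_000                  # "more than 15,000 cores"

    cpu_peak_tflops = CORES * CLOCK_HZ * FLOPS_PER_CORE_PER_CYCLE / 1e12
    print(f"Estimated CPU peak: ~{cpu_peak_tflops:.0f} TFLOPS")   # ~576 TFLOPS

The GPU nodes add further capacity on top of this, so the estimate should be read only as an illustration of where a figure of this order comes from.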

Connecting the cluster are several high-speed networks, the fastest of which is a two-level Intel Omni-Path Architecture network running at 100 Gb/s. BC4’s storage comprises one petabyte of disk provided by DDN’s GS7k and IME systems, running IBM’s Spectrum Scale parallel file system.

Benchmarking and optimisation at Lenovo’s HPC research centre in Stuttgart, the first facility of its kind, have ensured that BC4 is highly efficient in terms of physical footprint while making full use of the 30 kW per rack power limit. Lenovo’s commitment to third-party integration has allowed the University to avoid vendor lock-in while permitting new hardware to be added easily between refresh cycles.

Dr Christopher Woods continues: “To help with the interactive use of the cluster, BC4 has a visualisation node equipped with NVIDIA GRID vGPUs, which helps our scientists to visualise the work they’re doing, so researchers can use the system even if they’ve not used an HPC machine before.”

Housed at VIRTUS’ LONDON4 facility in Slough, the UK’s first shared data centre for research and education, BC4 is the first of the University’s supercomputers to be held at an independent facility. The system is directly connected to the Bristol campus via JISC’s high-speed Janet network. Kelly Scott, account director, education at VIRTUS Data Centres, said, “LONDON4 is specifically designed to have the capacity to host ultra-high-density infrastructure and high performance computing platforms, making it an ideal environment for systems like BC4. The University of Bristol is the 22nd organisation to join the JISC Shared Data Centre in our facility, which enables institutions to collaborate and share infrastructure resources to drive real innovation that advances meaningful research.”

Applications running on the University’s previous cluster, currently numbering in the hundreds, will be replicated on the new system, allowing researchers to create more applications and better-scaling software. Applications can be moved directly onto BC4 without the need for re-engineering.

“We’re now in our tenth year of using HPC in our facility. We’ve endeavoured to make each phase of BlueCrystal bigger and better than the last, embracing new technology for the benefit of our users and researchers,” commented Caroline Gardiner, Academic Research Facilitator at the University of Bristol.

Simon Burbidge, Director of Advanced Computing comments: “It is with great excitement that I take on the role of Director of Advanced Computing at this time, and I look forward to enabling the University’s ambitious research programmes through the provision of the latest computational techniques and simulations.”

Due to be launched at an event at the University of Bristol on 24th May, BC4 will support over 1,000 users carried over from BlueCrystal Phase 3.


Supporting scientific research at the Atomic Weapons Establishment

AWE benefiting from new end-to-end IBM Spectrum Scale and POWER8 systems

We are pleased to announce that we are supporting scientific research at the UK Atomic Weapons Establishment (AWE) with the design, testing and implementation of a new HPC cluster and a separate big data storage system.

AWE has been synonymous with science, engineering and technology excellence in support of the UK’s nuclear deterrent for more than 60 years. AWE, working to the required Ministry of Defence programme, provides and maintains warheads for the Trident nuclear deterrent.

The new HPC system is built on IBM’s POWER8 architecture, with a separate parallel file system, called Cedar 3, built on IBM Spectrum Scale. In early benchmark testing, Cedar 3 is operating 10 times faster than the previous high-performance storage system at AWE. Both the server and storage systems use IBM Spectrum Protect for data backup and recovery.

“Our work to maintain and support the Trident missile system is undertaken without actual nuclear testing, which has been the case ever since the UK became a signatory to the Comprehensive Nuclear Test Ban Treaty (CTBT); this creates extraordinary scientific and technical challenges – something we’re tackling head on with OCF,” comments Paul Tomlinson, HPC Operations at AWE. “We rely on cutting-edge science and computational methodologies to verify the safety and effectiveness of the warhead stockpile without conducting live testing. The new HPC system will be vital in this ongoing research.”

AWE works across the entire life cycle of warheads, from initial design and concept through manufacture, assembly and in-service support to decommissioning and disposal, ensuring maximum safety and protecting national security at all times.

The central data storage, Cedar 3, will be used by scientists across the AWE campus, with data replicated across the site.

“The work of AWE is of national importance and so its team of scientists need complete faith and trust in the HPC and big data systems in use behind the scenes, and the people deploying the technology,” says Julian Fielden, managing director, OCF. “Through our partnership with IBM, and the people, skills and expertise of our own team, we have been able to deliver a system which will enable AWE to maintain its vital research.”

The new HPC system runs on a suite of IBM POWER8 processor-based Power Systems servers running the IBM AIX V7.1 and Red Hat Enterprise Linux operating systems. The HPC platform consists of IBM Power E880, IBM Power S824L, IBM Power S812L and IBM Power S822 servers, providing ample processing capability to support all of AWE’s computational needs, together with an IBM tape library device to back up computation data.

Cedar 3, AWE’s parallel file system storage, is an IBM Storwize storage system. IBM Spectrum Scale enables AWE to more easily manage data access amongst multiple servers.

About the Atomic Weapons Establishment (AWE)
The Atomic Weapons Establishment has been central to the defence of the United Kingdom for more than 60 years through its provision and maintenance of the warheads for the country’s nuclear deterrent. This encompasses the initial concept, assessment and design of the nuclear warheads, through component manufacture and assembly, in-service support, decommissioning and then disposal.

Around 4,500 staff are employed at the AWE sites together with over 2,000 contractors. The workforce consists of scientists, engineers, technicians, craftspeople and safety specialists, as well as business and administrative experts – many of whom are leaders in their field. The AWE sites and facilities are government-owned, but the UK Ministry of Defence (MOD) has a government-owned contractor-operated contract with AWE Management Limited (AWE ML) to manage the day-to-day operations and maintenance of the UK’s nuclear stockpile. AWE ML is formed of three shareholders – Lockheed Martin, Serco and Jacobs Engineering Group. For further information, visit: http://www.awe.co.uk


OCF announces Iceotope as liquid cooling partner

We are pleased to announce a new partnership with Iceotope to deliver innovative cooling for the research and academic market

Iceotope, the liquid cooling company, and OCF are now able to offer greater flexibility for high performance computing (HPC) users.

The partnership brings together Iceotope’s cutting-edge cooling technology and our expertise in the HPC market to deliver fast performance without sacrificing space, noise or cost.

An HPC system built around Iceotope’s liquid cooling technology significantly reduces infrastructure, fitting more IT into the same footprint and saving costly floor space. The removal of fans enables total silence and reduced latency, bringing both minimal disruption and maximum performance for our customers.

Steve Reynolds, sales director at OCF, comments: “Iceotope’s novel approach to liquid cooling allows us to deliver compute capability for customers with environments outside the traditional air-cooled data centre – for example, a factory shop floor or an office environment where standard servers are too noisy. Our partnership with Iceotope enables us to provide an alternative and innovative solution for our customers.”

Peter Hopton, founder and technology director at Iceotope, comments: “We’re very pleased to partner with a leading HPC provider such as OCF.

“As processors become hotter, there is a major challenge for traditional cooling technologies to keep up. Thanks to our innovative liquid cooling, a high performance system becomes optimised for today’s workloads, with the added benefit of both significant capex and opex reductions.”


About Iceotope (www.iceotope.com)

Founded in 2005, Iceotope is a liquid cooling technology company specialising in flexible fan-less cooling solutions for High Performance Computing (HPC), Edge of Network, and Data Centre facilities. Based in Sheffield, UK, and backed by strategic venture capital, Iceotope’s corporate investors include OMBU Group and Aster Capital, a strategic investor representing Schneider Electric and Solvay.


eMedLab Shortlisted for UK Cloud Award


Congratulations eMedLab on being shortlisted for the UK Cloud Awards 2017

A solution designed and integrated by OCF has been shortlisted in the 2017 UK Cloud Awards in the ‘Best Public Sector Project’ category.

The MRC eMedLab consortium consists of University College London, Queen Mary University of London, the London School of Hygiene & Tropical Medicine, the Francis Crick Institute, the Wellcome Trust Sanger Institute, the EMBL European Bioinformatics Institute and King’s College London, and was funded with £8.9 million by the Medical Research Council.

The vision of MRC eMedLab is to maximise the gains for patients and for medical research that will come from the explosion in human health data. To realise this potential, the consortium of seven prestigious biomedical research organisations needs to accumulate medical and biological data of unprecedented scale and complexity, to coordinate it, to store it safely and securely, and to make it readily available to interested researchers.

The partnership’s aim was to build a private cloud infrastructure delivering significant computing capacity and storage to support the analysis of biomedical genomics, imaging and clinical data. Initially its main focus was on a range of diseases such as cancer, cardiovascular disease and rare diseases, subsequently broadening out to include neurodegenerative and infectious diseases.

The MRC eMedLab system is a private cloud with significant data storage capacity and very fast internal networking designed specifically for the types of computing jobs used in biomedical research. The new high-performance and big data environment consists of:

  • Red Hat Enterprise Linux OpenStack Platform
  • Red Hat Satellite
  • Lenovo System x Flex system with 252 hypervisor nodes and Mellanox 10Gb network with a 40Gb/56Gb core
  • Five tiers of storage, managed by IBM Spectrum Scale (formerly GPFS), for cost-effective data storage – scratch, Frequently Accessed Research Data, virtual cluster image storage, medium-term storage and previous-versions backup.

The project has become a key infrastructure resource for the Medical Research Council (MRC), which has funded six of these projects. The success has been attributed to MRC eMedLab’s concept of partnership working, where everybody uses one shared resource. This means not just sharing the HPC resource and sharing it efficiently, but also sharing the learning, the technology and the science at MRC eMedLab. Jacky Pallas, Director of Research Platforms, UCL, comments, “From the beginning there was an excellent partnership between the MRC eMedLab operations team and the technical specialists at OCF, working together to solve the issues which inevitably arise when building and testing a novel compute and data storage system.”

In total, there are over 20 different projects running on the MRC eMedLab infrastructure. These include:

  • The London School of Hygiene & Tropical Medicine is working on a project looking at population levels and the prevalence of HIV and TB, how the pathogen/bacteria evolve and the genetics of human resistance. This research is done in collaboration with researchers in Africa and Vietnam
  • Francis Crick Institute cancer based science – supporting a project run by Professor Charles Swanton investigating personalised immunotherapies against tumours
  • Great Ormond Street Hospital – collaboration on research on rare diseases in children
  • Linking genomics and brain imaging to better understand dementia
  • Studying rare mitochondrial diseases and understanding how stem cells function
  • Projects using UK Biobank data to identify and improve treatments for cardiovascular diseases
  • Deep mining of cancer genomics data to understand how cancer tumours evolve
  • Analysing virus genome sequences to enable the modelling and monitoring of infectious flu-type epidemics

The MRC eMedLab private cloud has shown that these new computing technologies can be used effectively to support research in the life sciences sector.

Professor Taane Clark, Professor of Genomics and Global Health, London School of Hygiene and Tropical Medicine comments, “The processing power of the MRC eMedLab computing resource has improved our ability to analyse human and pathogen genomic data, and is assisting us with providing insights into infectious disease genomics, especially in malaria host susceptibility, tuberculosis drug resistance and determining host-pathogen interactions.”


OCF achieves Red Hat Premier Partner for Cloud status


OCF has achieved Premier Business Partner status with Red Hat

We are delighted to have successfully achieved Red Hat Premier Partner for Cloud status, becoming the first and only Premier Cloud partner in the UK with the ability to provide end-to-end Cloud services.

Red Hat Premier Cloud status demonstrates OCF’s commitment to Cloud Infrastructure technologies including OpenStack, Containerisation (OpenShift), and Cloud & Virtualised Infrastructure Management through CloudForms.

This level has been achieved through the completion of advanced technical, architecture and sales training and accreditation tests, further developing and demonstrating OCF’s expertise in this area.

The Premier level recognises OCF’s contribution to Red Hat and the Red Hat partner ecosystem, giving OCF the highest level of visibility at Red Hat and in the marketplace, as well as access to the most competitive Red Hat pricing. The Cloud Infrastructure Specialist status means OCF gets priority when it comes to deploying Red Hat Cloud solutions with our customers. Additional support and expertise are also available from Red Hat, which can provide a rapid and in-depth response to any customer questions.

“We recognise that Red Hat’s Cloud products can offer excellent benefits to our customers. Being a Premier Cloud partner has enabled OCF’s sales and technical teams to develop their knowledge to provide customers with the most innovative solutions to meet their requirements,” says Mahesh Pancholi, business development manager, OCF plc. “This status complements OCF’s partner levels with other vendors and demonstrates our commitment to Red Hat.”

OCF started working in partnership with Red Hat in 2014. Since the partnership commenced, installations have been completed for a number of customers, including MRC eMedLab.

MRC eMedLab is a shared private-cloud infrastructure that provides efficient high-throughput HPC facilities to the seven members of the eMedLab consortium: University College London, the Francis Crick Institute, King’s College London, the London School of Hygiene & Tropical Medicine, Queen Mary University of London, the Wellcome Trust Sanger Institute, and EMBL-EBI. The solution consists mostly of Lenovo (formerly IBM) Flex Systems and IBM Spectrum Scale (formerly GPFS) Storage Servers and is controlled using Red Hat’s OpenStack Platform.

A wide range of medical research projects is supported by the private cloud environment, with the focus on building common tools and common ways of analysing and sharing data. In total, there are over 20 different projects running on the MRC eMedLab infrastructure, including a project looking at population levels and the prevalence of HIV and TB, how the pathogens and bacteria evolve, and the genetics of human resistance; collaborative research on rare diseases in children; and work linking genomics and brain imaging to better understand dementia.


OCF accredited under SSSNA framework agreement

We’re pleased to announce that we are now accredited to supply and integrate server and storage hardware to UK academia through the Server, Storage and Solution National Agreement (SSSNA). This IT purchasing framework agreement enables universities and higher and further education colleges to purchase equipment without the need to go through individual EU tenders, reducing procurement costs and time.

OCF has over 15 years’ experience of successfully designing and integrating high performance server and storage clusters into UK academia. Our customers include: University College London, King’s College London, Durham University and the University of Birmingham.

“The SSSNA accreditation is further reassurance to our current and prospective customers that OCF understands the requirements and intricacies of UK academia,” says Steve Reynolds, Sales Director, OCF. “SSSNA members will continue to benefit from the most competitive pricing on hardware in the education and research sector, in addition to being supported by our integration expertise. OCF is committed to this programme, and the fact that we have a dedicated framework team to help drive it into UK academia is testament to that.”

The new framework, which started in November 2016, is set to last for four years and is worth between £200m and £600m. The framework represents higher education institutes across the south of England (SUPC) and is a lead procurement on behalf of a range of other buying groups, including the Higher Education Purchasing Consortium for Wales (HEPCW), the London Universities Purchasing Consortium (LUPC), the North West Universities Purchasing Consortium (NWUPC), the North Eastern Universities Purchasing Consortium (NEUPC) and Advanced Procurement for Universities and Colleges (APUC). All members of these regional consortia benefit from:

  • Consistently low pricing on servers and storage technologies
  • Ability to purchase under a pre-tendered framework agreement
  • Exclusive product offers and promotions
  • Experienced technical pre- and post-sales consultancy
  • Fully comprehensive support and installation services

The framework is split into four main lots. OCF is listed on Lot 4 but is present with partners on all others:

Lot 1 – Servers (with Lenovo)
Lot 2 – Storage (with Lenovo)
Lot 3a – OEM Led Solutions – Converged, Hyper-Converged, Hybrid & Other (excluding HPC & DIC) (with Fujitsu, Hitachi and Lenovo)
Lot 3b – OEM Led Solutions – High Performance Computing (HPC) & Data Intensive Computing (DIC) (with Fujitsu and Lenovo)
Lot 4 – Reseller Led Solutions (OCF)

If you are interested in hearing more about how our frameworks can support your HPC and storage solutions, please get in touch.


Access the NVIDIA® DGX-1™: the World’s First Deep Learning Supercomputer in a Box

Get faster training, larger models, and more accurate results from deep learning with the NVIDIA® DGX-1™. This is the world’s first purpose-built system for deep learning and AI-accelerated analytics, with performance equal to 250 conventional servers. It comes fully integrated with hardware, deep learning software, development tools, and accelerated analytics applications. Immediately shorten data processing time, visualise more data, accelerate deep learning frameworks, and design more sophisticated neural networks.

Iterate and Innovate Faster
High-performance training accelerates your productivity, giving you faster insight and time to market.

Computing for Infinite Opportunities
The NVIDIA DGX-1 is the first system built with NVIDIA Pascal™-powered Tesla® P100 accelerators. The NVIDIA NVLink™ implementation delivers a massive increase in GPU memory capacity, giving you a system that can learn, see, and simulate our world.

Analyze. Visualize. AI-Accelerate
The NVIDIA DGX-1 software stack includes major deep learning frameworks, the NVIDIA DIGITS™ GPU training system, the NVIDIA Deep Learning SDK (e.g. CuDNN, NCCL), NVIDIA Docker, GPU drivers, and NVIDIA CUDA® for rapidly designing deep neural networks (DNN). It’s the ideal stack for accelerating popular analytics and visualisation software. This powerful system includes access to cloud management services for container creation and deployment, system updates, and an application repository. This software, running on Pascal powered Tesla GPUs, lets applications run 12X faster than previous GPU-accelerated solutions.

Turn Data into Knowledge
The innovative NVIDIA DGX-1 system lets you uncover patterns in large data sets to reveal new knowledge and insights in hours or minutes.

Stay Ahead of the Competition
NVIDIA DGX-1 is engineered with groundbreaking technologies that deliver the fastest solutions for your deep learning training and AI-accelerated analytics workloads.

Maximise Your Investment
Hardware and software support gives you access to NVIDIA deep learning expertise and includes cloud management services, software upgrades and updates, and priority resolution of your critical issues.

Please get in touch with us to learn more.


OCF Holiday Opening Times

Our opening hours will be a little different to usual over the Christmas holidays:

Friday 23rd December – Closed from 1pm

Monday 26th December – Closed
Tuesday 27th December – Closed
Wednesday 28th December – 10am-4pm
Thursday 29th December – 10am-4pm
Friday 30th December – 10am-4pm
Monday 2nd January 2017 – Closed
Tuesday 3rd January – Usual opening hours

We wish all our customers and partners a very merry Christmas and a happy New Year!


Experience the fastest application performance on the market

Be one of the first to get your hands on the IBM Power Systems S822LC.

Let your research outshine the rest and advance your status as a premier research institution:

  • Progress and publish research findings earlier
  • Perform simulations in greater detail
  • Discover new insights

Improve your application performance with 5 times faster data exchange. NVLink on POWER8 brings you the unique benefit of GPU-to-GPU and CPU-to-GPU communications, reducing the PCIe bottleneck and delivering up to 2.5 times improved performance.
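For illustration only, the “5 times” figure is broadly consistent with comparing nominal link bandwidths (assumed round numbers, not IBM or NVIDIA measurements): a PCIe 3.0 x16 slot is commonly quoted at around 16 GB/s, while the NVLink connection between the POWER8 CPU and each GPU in the S822LC is commonly quoted at around 80 GB/s:

    # Illustrative comparison behind the "5x faster data exchange" claim (assumed nominal figures;
    # real throughput depends on workload and on how uni-/bi-directional bandwidth is counted)
    pcie3_x16_gb_s = 16.0    # approx. PCIe 3.0 x16 bandwidth
    nvlink_gb_s = 80.0       # commonly quoted CPU-to-GPU NVLink bandwidth on the S822LC

    print(f"NVLink vs PCIe 3.0 x16: ~{nvlink_gb_s / pcie3_x16_gb_s:.0f}x")   # ~5x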

For further details, or to buy your IBM Power Systems S822LC, contact us at info@ocf.co.uk.


OCF acquires Intense Computing to strengthen analytics expertise

22nd November: HPC, storage and data analytics integrator OCF has acquired a majority share of Intense Computing Ltd, a provider of data and analytics consultancy and solutions. Intense Computing will become a subsidiary of OCF plc and is being renamed OCF DATA Limited, strengthening and enhancing OCF’s data and analytics services for customers.

The growth in data and analytics together with the Internet of Things (IoT) is estimated to add £322bn to the UK economy by 2020, in addition to creating 182,000 new jobs, according to research carried out by the Centre for Economics and Business Research (CEBR). Its report was based on 409 interviews with senior UK decision makers, as well as official government data.

OCF DATA Limited is led by Prof Cliff Brereton (co-founder of Intense Computing Limited), former Director of the Hartree Centre, the world’s largest high performance and data analytics centre devoted to delivering industry-focused solutions. OCF DATA Limited develops transformative solutions using data and analytics for customers, specifically in manufacturing, health, education, construction and Government.

“By acquiring Intense Computing Limited, overnight we are strengthening our expertise and capability to deliver transformative analytics solutions for customers,” says Julian Fielden, managing director, OCF. “The new OCF DATA Limited is now boosting our knowledge in the delivery of Business Intelligence solutions, the Internet of Things and Artificial Intelligence, enabling us to better support those customers that increasingly look for innovative solutions and insights using data and analytics.”

“We are excited that new customers are transforming their businesses by adopting data and analytics solutions and services designed and delivered by OCF DATA, using advanced and market-leading analytics technologies from SAS,” says Ian Cosnett, SAS.

Since its founding in 2015, Intense Computing has earned a reputation for delivering transformative solutions that generate real ROI from analytics deployments. The acquisition enables OCF to deliver Intense Computing’s IP and methods more widely to customers, using a new and growing sales and technical team within OCF DATA Limited as well as the existing talent within OCF plc.

“OCF Data has recently delivered a fully hosted data analytics solution to Integrated Radiological Services, built on SAS software. This has significantly reduced our workload in processing clients’ data, allowing our staff to divert their time to using the powerful analytics tools for greater insight. This solution allows IRS to deliver an enhanced service to our customers and will allow us to further develop our services using advanced data analytics,” says Mike Moores, Managing Director, Integrated Radiological Services Ltd.



    Contact Us

    HEAD OFFICE:
    OCF plc
    Unit 5, Rotunda Business Centre,
    Thorncliffe Park, Chapeltown,
    Sheffield, S35 2PG

    Tel: +44 (0)114 257 2200
    Fax: +44 (0)114 257 0022
    E-Mail: info@ocf.co.uk

    SUPPORT DETAILS:
    OCF Hotline: 0845 702 3829
    E-Mail: support@ocf.co.uk
    Helpdesk: support.ocf.co.uk

    DARESBURY OFFICE:
    The Innovation Centre, Sci-Tech Daresbury,
    Keckwick Lane, Daresbury,
    Cheshire, WA4 4FS

    Tel: +44 (0)1925 607 360
    Fax: +44 (0)114 257 0022
    E-Mail: info@ocf.co.uk

    OCF plc is a company registered in England and Wales. Registered number 4132533. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield, S35 2PG
