
OCF accredited under SSSNA framework agreement

We’re pleased to announce that we are now accredited to supply and integrate server and storage hardware to UK academia through the Server, Storage and Solution National Agreement (SSSNA). This IT purchasing framework agreement enables universities and higher and further education colleges to purchase equipment without going through individual EU tenders, reducing procurement costs and time.

OCF has over 15 years’ experience of successfully designing and integrating high performance server and storage clusters for UK academia. Our customers include University College London, King’s College London, Durham University and the University of Birmingham.

“The SSSNA accreditation is further reassurance to our current and prospective customers that OCF understands the requirements and intricacies of UK academia,” says Steve Reynolds, Sales Director, OCF. “SSSNA members will continue to benefit from the most competitive pricing on hardware in the education and research sector, in addition to being supported with our integration expertise. OCF is committed to this programme, and the dedicated framework team we have in place to help drive it into UK academia is testament to that.”

The new framework, which started in November 2016, runs for four years and is worth between £200m and £600m. It is led by the Southern Universities Purchasing Consortium (SUPC), representing higher education institutions across the south of England, and procures on behalf of a range of other buying groups including the Higher Education Purchasing Consortium for Wales (HEPCW), the London Universities Purchasing Consortium (LUPC), the North West Universities Purchasing Consortium (NWUPC), the North Eastern Universities Purchasing Consortium (NEUPC) and the Advanced Procurement for Universities and Colleges (APUC). All members of these regional consortia benefit from:

  • Consistently low pricing on servers and storage technologies
  • Ability to purchase under a pre-tendered framework agreement
  • Exclusive product offers and promotions
  • Experienced technical pre- and post-sales consultancy
  • Fully comprehensive support and installation services

The framework is split into four main lots. OCF is listed on Lot 4 but is present with partners on all others:

Lot 1 – Servers (with Lenovo)
Lot 2 – Storage (with Lenovo)
Lot 3a – OEM Led Solutions – Converged, Hyper-Converged, Hybrid & Other (excluding HPC & DIC) (with Fujitsu, Hitachi and Lenovo)
Lot 3b – OEM Led Solutions – High Performance Computing (HPC) & Data Intensive Computing (DIC) (with Fujitsu and Lenovo)
Lot 4 – Reseller Led Solutions (OCF)

If you are interested in hearing more about how our frameworks can support your HPC and storage solutions, please get in touch.


Access the NVIDIA® DGX-1™: the World’s First Deep Learning Supercomputer in a Box

Get faster training, larger models, and more accurate results from deep learning with the NVIDIA® DGX-1™. This is the world’s first purpose-built system for deep learning and AI-accelerated analytics, with performance equal to 250 conventional servers. It comes fully integrated with hardware, deep learning software, development tools, and accelerated analytics applications. Immediately shorten data processing time, visualise more data, accelerate deep learning frameworks, and design more sophisticated neural networks.

Iterate and Innovate Faster
High-performance training accelerates your productivity, giving you faster insight and time to market.

Computing for Infinite Opportunities
The NVIDIA DGX-1 is the first system built with NVIDIA Pascal™-powered Tesla® P100 accelerators. The NVIDIA NVLink™ implementation delivers a massive increase in GPU memory capacity, giving you a system that can learn, see, and simulate our world.

Analyze. Visualize. AI-Accelerate
The NVIDIA DGX-1 software stack includes major deep learning frameworks, the NVIDIA DIGITS™ GPU training system, the NVIDIA Deep Learning SDK (e.g. cuDNN, NCCL), NVIDIA Docker, GPU drivers, and NVIDIA CUDA® for rapidly designing deep neural networks (DNNs). It’s the ideal stack for accelerating popular analytics and visualisation software. This powerful system includes access to cloud management services for container creation and deployment, system updates, and an application repository. This software, running on Pascal-powered Tesla GPUs, lets applications run 12x faster than previous GPU-accelerated solutions.
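
As a flavour of how that stack is used in practice, here is a minimal sketch of a single GPU training step. It assumes PyTorch, one of the deep learning frameworks commonly run in NVIDIA’s containerised stack; it is an illustrative example rather than DGX-1-specific code, and the network, data and hyperparameters are placeholders.

    # Minimal sketch: one training step on a GPU, assuming PyTorch is available
    # (for example inside a framework container launched via NVIDIA Docker).
    import torch
    import torch.nn as nn

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print("Training on:", torch.cuda.get_device_name(0) if device.type == "cuda" else "CPU")

    # A tiny fully connected network stands in for a real deep neural network.
    model = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
    optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    # Synthetic batch: 64 samples of 1,024 features with random class labels.
    inputs = torch.randn(64, 1024, device=device)
    labels = torch.randint(0, 10, (64,), device=device)

    optimiser.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()    # gradients are computed on the GPU via CUDA/cuDNN kernels
    optimiser.step()
    print("loss:", loss.item())

On a multi-GPU system such as the DGX-1, the same code is typically scaled out with the framework’s data-parallel utilities, which use NCCL for inter-GPU communication.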

Turn Data into Knowledge
The innovative NVIDIA DGX-1 system lets you uncover patterns in large data sets to reveal new knowledge and insights in hours or minutes.

Stay Ahead of the Competition
NVIDIA DGX-1 is engineered with groundbreaking technologies that deliver the fastest solutions for your deep learning training and AI-accelerated analytics workloads.

Maximise Your Investment
Hardware and software support gives you access to NVIDIA deep learning expertise and includes cloud management services, software upgrades and updates, and priority resolution of your critical issues.

Please get in touch with us to learn more.


OCF Holiday Opening Times

Our opening hours will be a little different to usual over the Christmas holidays:

Friday 23rd December – Closed from 1pm

Monday 26th December – Closed
Tuesday 27th December – Closed
Wednesday 28th December – 10am-4pm
Thursday 29th December – 10am-4pm
Friday 30th December – 10am-4pm
Monday 2nd January 2017 – Closed
Tuesday 3rd January – Usual opening hours

We wish all our customers and partners a very merry Christmas and a happy New Year!


Experience the fastest application performance on the market

Be one of the first to get your hands on the IBM Power Systems S822LC.

Let your research outshine the rest and advance your status as a premier research institution:

  • Progress and publish research findings earlier
  • Perform simulations in greater detail
  • Discover new insights

Improve your application performance with 5 times faster data exchange. NVLink on POWER8 brings you the unique benefit of GPU-to-GPU and CPU-to-GPU communications, reducing the PCIe bottleneck and delivering up to 2.5 times better performance.

For further details, or to buy your IBM Power Systems S822LC, contact us at info@ocf.co.uk.


OCF acquires Intense Computing to strengthen analytics expertise

22nd November – HPC, storage and data analytics integrator OCF has acquired a majority share of Intense Computing Ltd, a provider of data and analytics consultancy and solutions. Intense Computing will become a subsidiary of OCF plc and is being renamed OCF DATA Limited, strengthening and enhancing OCF’s data and analytics services for customers.

The growth in data and analytics together with the Internet of Things (IoT) is estimated to add £322bn to the UK economy by 2020, in addition to creating 182,000 new jobs, according to research carried out by the Centre for Economics and Business Research (CEBR). Its report was based on 409 interviews with senior UK decision makers, as well as official government data.

OCF DATA Limited is led by Prof Cliff Brereton (co-founder of Intense Computing Limited), former Director of the Hartree Centre, the world’s largest high performance computing and data analytics centre devoted to delivering industry-focused solutions. OCF DATA Limited develops transformative solutions using data and analytics for customers, specifically in manufacturing, health, education, construction and government.

“By acquiring Intense Computing Limited, overnight we are strengthening our expertise and capability to deliver transformative analytics solutions for customers,” says Julian Fielden, managing director, OCF. “The new OCF DATA Limited is now boosting our knowledge in the delivery of Business Intelligence solutions, the Internet of Things and Artificial Intelligence, enabling us to better support those customers that increasingly look for innovative solutions and insights using data and analytics.”

“We are excited that new customers are transforming their businesses by adopting data and analytics solutions and services designed and delivered by OCF DATA, using advanced and market-leading analytics technologies from SAS,” says Ian Cosnett, SAS.

Since its founding in 2015, Intense Computing has earned a reputation for delivering transformative solutions that generate real ROI from analytics deployments. The acquisition enables OCF to deliver Intense Computing’s IP and methods more widely to customers, through a new and growing sales and technical team within OCF DATA Limited as well as the existing talent within OCF plc.

“OCF DATA has recently delivered a fully hosted data analytics solution to Integrated Radiological Services, built on SAS software. This has significantly reduced our workload in processing clients’ data, allowing our staff to divert their time to using the powerful analytics tools for greater insight. This solution allows IRS to deliver an enhanced service to our customers and will allow us to further develop our services using advanced data analytics,” says Mike Moores, Managing Director, Integrated Radiological Services Ltd.


University of Leeds students get stuck into HPC

Two students from the University of Leeds have joined OCF as part of their summer internship to expand their experience of working in a High Performance Computing (HPC) environment.

Jamie Roberts and Joshua Crinall, both currently in their second year of a four-year Masters in Computer Science, are spending a week with the OCF engineering team, where they are primarily working on decommissioning a large HPC system at one location and rebuilding it at a second site. The internship is part of a broader eight-week programme offered by the Advanced Research Computing group at the University, which aims to provide practical knowledge of HPC as well as to show how HPC supports scientific computing.

OCF’s on-site system administrator Ian McKenna (centre) is pictured with Joshua (left) and Jamie (right) inside the datacentre

Jamie comments: “I am excited to gain experience of working with an experienced and qualified team who deliver bespoke HPC solutions. It is a great opportunity to gain hands-on HPC experience as well as understand the architecture of the various systems. It is interesting to learn how a certain architecture suits a certain set of customer business problems. The experience so far has been brilliant, everybody has been very supportive of our learning, and the working environment is very friendly.”

Joshua comments: “It’s really helped me appreciate what it takes to get a supercomputer up and running and I’ve thoroughly enjoyed the experience. HPC has always been an interesting concept to me; studying computer science, supercomputers always felt out of reach, but working with ARC and OCF has proven otherwise. I think the best thing for me is knowing my work is helping further research that would be impossible without HPC equipment. It really is the cutting edge of computing.”

Russell Slack, Operations Director, said: “We are very pleased to have Jamie and Joshua as part of our engineering team. It allows us to provide valuable experience to enthusiastic students in an industry that is lacking such specific skills and knowledge.” Russell goes on to say: “I started my career with OCF as an apprentice 20 years ago and now I’m Operations Director here. It just shows that with hard work and dedication, HPC can be the industry for a successful career.”


How OpenFOAM on POWER8 is stretching the performance envelope

Computational fluid dynamic simulations remain a powerful tool in a wide range of engineering and scientific disciplines, including aerospace, automotive, power generation, chemical manufacturing, medical research, and astrophysics.

If you are serious about achieving the right results using CFD, the IBM Power platform offers performance up to 3x better than the x86 platform. We believe that OpenFOAM running on IBM’s POWER8 servers offers the right combination of performance, reliability and cost-effectiveness, especially when combined with our unique integration service and support offering, which brings together the benefits of OpenFOAM with the high performance of POWER8 for faster, more reliable modelling.

Learn more with our white paper and at www.openfoamonpower.co.uk


OCF named 48th in the Investec Mid-Market 100 list of the UK’s fastest-growing private companies

UK HPC integrator recognised as one of the fastest growing private companies

OCF plc, the UK’s trusted high performance computing, storage and data analytics provider, has entered the Investec Mid-Market 100, a list of the UK’s fastest-growing private mid-market companies, after another successful year of trading.

The list aims to fill a gap in understanding of how the UK mid-market operates and track the market’s performance over time, demonstrating how it is possible to sustain meaningful growth in a business.

Julian Fielden, OCF Managing Director, comments: “It is a fantastic honour to be recognised. Our growth over recent years may be attributed to a number of factors; the main external driver has been the UK Government’s drive to support research by funding significant investments in HPC and big data infrastructure. A significant internal driver has been the great efforts that we have made to ensure a first class customer experience. We operate to ISO 9001 standards, our contracts are all controlled by mutually agreed Statements of Work and our services are delivered to agreed Service Level Agreements.

“The quality of our people is one of our key differentiators. We make a continual effort to effectively train and motivate our employees. As long as we maintain our focus and technical excellence, we have every confidence that our success will continue.”

Earlier in the year, OCF was recognised in the Northern Technology Awards presented by GP Bullhound, being named among the Top 50 Fastest Growing Technology Companies in the North as well as the Top 15 Fastest Growing Larger Technology Companies in the North.


Virtual HPC Clusters Enable Cancer, Cardio-Vascular and Rare Diseases Research

OpenStack-based Cloud enables cost-effective, self-provisioned compute resources

eMedLab, a partnership of seven leading bioinformatics research and academic institutions, is using a new private cloud, HPC environment and big data system to support the efforts of hundreds of researchers studying cancers, cardio-vascular and rare diseases. Their research focuses on understanding the causes of these diseases and how a person’s genetics may influence their predisposition to the disease and potential treatment responses.

The new HPC cloud environment combines a Red Hat Enterprise Linux OpenStack Platform with Lenovo Flex System hardware to enable the creation of virtual HPC clusters bespoke to individual researchers’ requirements. The system has been designed, integrated and configured by OCF, an HPC, big data and predictive analytics provider, working closely with its partners Red Hat, Lenovo, Mellanox Technologies and in collaboration with eMedlab’s research technologists.

The High Performance Computing environment is being hosted at a shared data centre for education and research, offered by digital technologies charity Jisc. The data centre has the capacity, technological capability and flexibility to future-proof and support all of eMedLab’s HPC needs, with its ability to accommodate multiple and varied research projects concurrently in a highly collaborative environment. The ground-breaking facility is focused on the needs of the biomedical community and will revolutionise the way data sets are shared between leading scientific institutions internationally.

The eMedLab partnership was formed in 2014 with funding from the Medical Research Council. Original members University College London, Queen Mary University of London, London School of Hygiene & Tropical Medicine, the Francis Crick Institute, the Wellcome Trust Sanger Institute and the EMBL European Bioinformatics Institute have been joined recently by King’s College London.

“Bioinformatics is a very, very data intensive discipline,” says Jacky Pallas, Director of Research Platforms, University College London. “We want to study a lot of de-identified, anonymous human data. It’s not practical – from data transfer and data storage perspectives – to have scientists replicating the same datasets across their own, separate physical HPC resources, so we’re creating a single store for up to 6 Petabytes of data and a shared HPC environment within which researchers can build their own virtual clusters to support their work.”

The Red Hat Enterprise Linux OpenStack Platform, a highly scalable Infrastructure-as-a-Service (IaaS) solution, enables scientists to create and use virtual clusters bespoke to their needs, allowing them to select compute memory, processors, networking, storage and archiving policies, all orchestrated by a simple web-based user interface. Researchers will be able to access up to 6,000 cores of processing power.
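
To illustrate what that self-service model looks like programmatically, the sketch below launches a single virtual cluster node using the openstacksdk Python library. The cloud entry, image, flavour and network names are placeholders rather than eMedLab’s actual configuration, and in practice researchers would more often use the web interface.

    # Minimal sketch of self-service provisioning on an OpenStack cloud using
    # the openstacksdk library. All names below are illustrative placeholders.
    import openstack

    # Credentials are read from a clouds.yaml entry or OS_* environment variables.
    conn = openstack.connect(cloud="emedlab")    # hypothetical cloud entry name

    # Launch one node of a virtual cluster with a chosen flavour (CPU/memory shape).
    server = conn.create_server(
        name="vcluster-node-01",
        image="CentOS-7",          # placeholder image name
        flavor="m1.xlarge",        # placeholder flavour (vCPUs and RAM)
        network="research-net",    # placeholder tenant network
        wait=True,
    )
    print(server.name, server.status)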

“We generate such large quantities of data that it can take weeks to transfer data from one site to another,” says Tim Cutts, Head of Scientific Computing, the Wellcome Trust Sanger Institute. “Data in eMedLab will stay in one secure place and researchers will be able to dynamically create their own virtual HPC cluster to run their software and algorithms to interrogate the data, choosing the number of cores, operating system and other attributes to create the ideal cluster for their research.”

Tim adds: “The Red Hat Enterprise Linux OpenStack Platform enables our researchers to do this rapidly and using open standards which can be shared with the community.”

Arif Ali, Technical Director of OCF says: “The private cloud HPC environment offers a flexible solution through which virtual clusters can be deployed for specific workloads. The multi-tenancy features of the Red Hat platform enable different institutions and research groups to securely co-exist on the same hardware, and share data when appropriate.”

“This is a tremendous and important win for Red Hat,” says Radhesh Balakrishnan, general manager, OpenStack, Red Hat. “eMedLab’s deployment of Red Hat Enterprise Linux OpenStack Platform into its HPC environment for this data intensive project further highlights our leadership in this space and ability to deliver a fully supported, stable, and reliable production-ready OpenStack solution.

“Red Hat technology allows consortia such as eMedLab to use cutting-edge self-service compute, storage, networking, and other new services as these are adopted as core OpenStack technologies, while still offering the world-class service and support that Red Hat is renowned for. The use of Red Hat Enterprise Linux OpenStack Platform provides cutting-edge technologies along with enterprise-grade support and services, leaving researchers free to focus on the research and other medical challenges.”

“Mellanox end-to-end Ethernet solutions enable cloud infrastructures to optimize their performance and to accelerate big data analytics,” said Kevin Deierling, vice president of marketing at Mellanox Technologies. “Intelligent interconnect with offloading technologies, such as RDMA and cloud accelerations, is key for building the most efficient private and cloud environments. The collaboration between the organisations as part of this project demonstrates the power of the eco-systems to drive research and discovery forward.”

The new high performance computing and big data environment consists of:

  • Red Hat Enterprise Linux OpenStack Platform
  • Red Hat Satellite
  • Lenovo System x Flex System with 252 hypervisor nodes and a Mellanox 10Gb network with a 40Gb/56Gb core
  • Five tiers of storage, managed by IBM Spectrum Scale (formerly GPFS), for cost-effective data storage: scratch, Frequently Accessed Research Data, virtual cluster image storage, medium-term storage and previous-versions backup


BlueBEAR at the University of Birmingham

A team of researchers based at The University of Birmingham is working on ground-breaking research to create a proton Computed Tomography (CT) image that will help to facilitate treatment of cancer patients in the UK. Proton therapy targets tumours very precisely using a proton beam and can cause less damage to surrounding tissue than conventional radiotherapy – for this reason it can be beneficial treatment for children.

Proton therapy planning is generally reliant on X-rays to image the body’s composition and the location of healthy tissue before treatment; this research hopes to simulate the use of actual protons, not X-rays, to image the body and, in doing so, improve the accuracy of the final treatment. It forms part of a larger research project set up to build a device capable of delivering protons in this way in a clinical setting.

Working for the PRaVDA Consortium, a three-year project funded by the Wellcome Trust and led by researchers at the University of Lincoln, the team is using The University of Birmingham’s centrally funded High Performance Computing (HPC) service, BlueBEAR, to simulate the use of protons for CT imaging. The team hopes to simulate 1,000 million protons per image over the course of the project, and will do so 97 per cent faster than on a desktop computer. A test simulation of 180 million protons, which would usually take 5,400 hours without the cluster, has already been completed in 72 hours (three days).
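
As a quick back-of-envelope check, the figures quoted for that test simulation correspond to roughly a 75-fold speedup over the estimated desktop runtime:

    # Speedup implied by the quoted figures for the 180-million-proton test.
    hours_without_cluster = 5400   # estimated desktop runtime
    hours_on_bluebear = 72         # actual runtime on BlueBEAR (three days)

    speedup = hours_without_cluster / hours_on_bluebear
    print(f"Speedup: {speedup:.0f}x")   # -> Speedup: 75x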

The research team is tasked with proving the principle that a 10cm proton CT image, similar in size to a child’s head, can be created. In doing so, it will be the largest proton CT image ever created.

Dr Tony Price, PRaVDA research fellow, says, “The research will give us a better understanding of how a proton beam interacts with the human body, ultimately improving the accuracy of proton therapy. The HPC service at The University of Birmingham is essential for us to complete our research, as it gives us the capacity to simulate and record the necessary number of histories to create an image. It took us only three days to run a simulation of 180 million protons, which would usually take 5,400 hours without the cluster.”

The BlueBEAR HPC service in use by the PRaVDA Consortium was designed, built and integrated in 2013 by HPC, data management, storage and analytics provider OCF. Due to the stability and reliability of the core service, researchers have invested in expanding this service with nodes purchased from their own grants and integrated into the core service on the understanding that these nodes will be available for general use when not required by the research group.

This has expanded the capacity of the centrally-funded service by 33 per cent, showing the confidence that the researchers have in the core service. The service is used by researchers from the whole range of research areas at the University, from the traditional HPC users in the STEM (Science, Technology, Engineering and Mathematics) disciplines to non-traditional HPC users such as Psychology and Theology.

Paul Hatton, HPC & Visualisation Specialist, IT Services, The University of Birmingham, says: “The HPC service built by OCF has proven over the past two years to be of immense value to a multitude of researchers at the University. Instead of buying small workstations, researchers are using our central HPC service because it is easy for them to buy and add their own cores when required.

“We work closely with OCF to encourage new users onto the service and provide a framework for users requesting capacity. The flexible, scalable and unobtrusive design of the high performance clusters has made it easy for us to scale up our HPC service according to the increase in demand.”

Technology

  • The server clusters use Lenovo System x iDataPlex® with Intel Sandy Bridge processors. OCF has installed more high performance server clusters using the industry-leading Lenovo iDataPlex server than any other UK integrator.
  • The server clusters also use IBM Tivoli Storage Manager for data backup and IBM GPFS software, which enables more effective storage capacity expansion, enterprise-wide interdepartmental file sharing, commercial-grade reliability, cost-effective disaster recovery and business continuity.
  • The scheduling system on BlueBEAR is Adaptive Computing’s MOAB software, which enables the scheduling, managing, monitoring and reporting of HPC workloads (a brief job-submission sketch follows this list).
  • Use of Mellanox’s Virtual Protocol Interconnect (VPI) cards within the cluster design would make it easier for IT Services to redeploy nodes between the various components of the BEAR services should workloads change.
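
Below is a minimal sketch of what submitting work to a Moab-scheduled cluster such as BlueBEAR can look like: a PBS-style batch script is written out and handed to the msub command. The job name, resource requests and application are illustrative placeholders, not BlueBEAR’s actual configuration.

    # Minimal sketch: generate a PBS-style batch script and submit it to the
    # Moab scheduler with msub. All values below are illustrative only.
    import subprocess

    job_script = """#!/bin/bash
    #PBS -N proton_sim
    #PBS -l nodes=1:ppn=8
    #PBS -l walltime=02:00:00
    cd $PBS_O_WORKDIR
    ./run_simulation    # placeholder for the real application
    """

    with open("job.sh", "w") as f:
        f.write(job_script)

    # msub prints the new job's identifier on successful submission.
    result = subprocess.run(["msub", "job.sh"], capture_output=True, text=True)
    print("Submitted job:", result.stdout.strip())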

 



    Contact Us

    HEAD OFFICE:
    OCF plc
    Unit 5 Rotunda, Business Centre,
    Thorncliffe Park, Chapeltown,
    Sheffield, S35 2PG

    Tel: +44 (0)114 257 2200
    Fax: +44 (0)114 257 0022
    E-Mail: info@ocf.co.uk

    SUPPORT DETAILS:
    OCF Hotline: 0845 702 3829
    E-Mail: support@ocf.co.uk
    Helpdesk: support.ocf.co.uk

    DARESBURY OFFICE:
    The Innovation Centre, Sci-Tech Daresbury,
    Keckwick Lane, Daresbury,
    Cheshire, WA4 4FS

    Tel: +44 (0)1925 607 360
    Fax: +44 (0)114 257 0022
    E-Mail: info@ocf.co.uk

    OCF plc is a company registered in England and Wales. Registered number 4132533. Registered office address: OCF plc, 5 Rotunda Business Centre, Thorncliffe Park, Chapeltown, Sheffield, S35 2PG
