Next Generation Compute
Graphcore, founded in 2016 and headquartered in Bristol, UK, has created a new kind of processor designed specifically for machine learning and AI workloads: the Intelligence Processing Unit (IPU). The IPU’s unique architecture lets AI researchers undertake entirely new types of work, not possible using current technologies, to drive the next advances in machine intelligence. The IPU is set to be transformative across all industries, whether you are a medical researcher, a roboticist or an engineer building autonomous cars.
OCF partnered with Graphcore in September 2020 to join the IPU journey in the machine learning and artificial intelligence space. OCF can supply Graphcore systems and also develop complete end-to-end AI solutions centered around Graphcore technology by combining IPUs with software, networking and storage from OCF’s extensive partner ecosystem.
The IPU-M2000 is a revolutionary next-generation system solution built with the Colossus MK2 IPU. It packs 1 petaFLOPS of AI compute and up to 450GB Exchange-Memory™ in a slim 1U blade for the most demanding machine intelligence workloads.
The IPU-M2000 has a flexible, modular design, so you can start with one and scale to thousands. With the IPU-M2000 you can directly connect a single system to an existing CPU server, add up to eight connected IPU-M2000s, or grow to supercomputing scale with racks of 16 tightly interconnected IPU-M2000s in IPU-POD64 systems, thanks to the high-bandwidth, near-zero-latency IPU-Fabric™ interconnect architecture built into the box.
IPU-POD16 DA (Direct Attach) is the ideal platform for exploration, innovation and development, letting AI teams make new breakthroughs in machine intelligence. Four IPU-M2000s, supported by a host server, deliver a powerful 4 petaFLOPS of AI compute for both training and inference workloads in an affordable, compact 5U system.
IPU-POD16 DA is designed to get you up and running in no time: a turnkey system featuring IPU-M2000s directly attached to an approved host server, ready for installation in your datacenter. Extensive documentation and support are provided by AI experts at both OCF and Graphcore.
IPU-POD64 is Graphcore's unique solution for massive, disaggregated scale-out enabling high-performance machine intelligence compute to supercomputing scale. The IPU-POD64 builds upon the innovative IPU-M2000 and offers seamless scale-out up to 64,000 IPUs working as one integral whole or as independent subdivided partitions to handle multiple workloads and different users.
The IPU-POD64 has 16 IPU-M2000s in a standard rack. IPU-PODs communicate with near-zero latency using Graphcore's unique IPU-Fabric™ interconnect architecture. IPU-Fabric has been specifically designed to eliminate communication bottlenecks and allow thousands of IPUs to operate on machine intelligence workloads as a single, high-performance and ultra-fast cohesive unit.
Graphcloud is a secure, cloud-based, commercial machine-learning (ML) platform running on Graphcore MK2 IPU-POD systems hosted by Cirrascale in partnership with Graphcore and available to customers worldwide.
Graphcloud is a MK2 IPU-POD scale-out cluster, offering a simple way to add state-of-the-art machine intelligence compute on demand, without the need for on-premise hardware deployment.
Graphcloud is ideal as you scale from experimentation and proof of concept projects to pilots and production systems.
The Poplar SDK is a complete software stack, co-designed from scratch with the IPU, which implements Graphcore's graph toolchain in an easy-to-use, flexible software development environment.
At a high level, Poplar is fully integrated with standard machine learning frameworks so developers can port existing models easily, and get up and running out-of-the-box with new applications in a familiar environment.
For developers who want full control to exploit maximum performance from the IPU, Poplar enables direct IPU programming in Python and C++.
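To give a flavour of the graph-programming model behind Poplar, the toy sketch below expresses a computation as a graph of operations and dependencies and then executes it in dependency order. This is a pure-Python illustration of the idea only; the function and node names are invented for this example and are not the Poplar API, which compiles such graphs for the IPU's many parallel cores.

```python
# Toy illustration of a compute graph, in the spirit of a graph
# toolchain. NOTE: this is NOT the Poplar API; all names here are
# invented for illustration.

def run_graph(graph, inputs):
    """Execute a graph given as {node: (op, [dependency_nodes])},
    resolving each node's dependencies before applying its op."""
    values = dict(inputs)

    def evaluate(node):
        if node in values:
            return values[node]
        op, deps = graph[node]
        values[node] = op(*(evaluate(d) for d in deps))
        return values[node]

    return {node: evaluate(node) for node in graph}

# A tiny "model" expressed as a graph: y = relu(w*x + b)
graph = {
    "wx": (lambda w, x: w * x,    ["w", "x"]),
    "z":  (lambda wx, b: wx + b,  ["wx", "b"]),
    "y":  (lambda z: max(z, 0.0), ["z"]),
}
result = run_graph(graph, {"w": 2.0, "x": 3.0, "b": -1.0})
print(result["y"])  # prints 5.0
```

In practice, developers rarely build graphs by hand at this level: Poplar's integration with standard frameworks means an existing model definition can be lowered to the IPU's graph representation by the toolchain, while the C++ and Python interfaces remain available for those who want direct control.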