Many HPC Storage Challenges –
One Efficient Software Solution

What if the right storage software could do it all: deliver the performance and scalability today's HPC workloads require, eliminate data silos, and save admins valuable time?

Smart Storage for Demanding HPC Workloads

Affordable performance and efficient management at scale

HPC continues to be the most demanding environment for storage. As a result, HPC facilities are ahead of the crowd when it comes to providing storage for a broad range of applications. And today's HPC setups – in both commercial HPC and scientific research – have to support a diversity of user workloads across domains. Each workload can have a vastly different performance profile, varying requirements for storage protocols and interfaces, and massive scalability needs. Add fault tolerance, manageability, and affordability at scale, and you have one very demanding environment indeed.


Ingest, Process, Archive, and Share –
With Just One Unified Storage System

Quobyte facilitates the entire HPC workflow – no more silos, no more tedious and complex capacity planning, and better economics thanks to operational efficiency at scale.


Powerful and Scalable Scratch Space

Scratch space often becomes a limiting factor when running an HPC job, because it’s either too small or too slow. And since its performance and capacity requirements can change significantly from job to job or user to user, quick and easy configuration and manageability are a must in modern HPC settings.

Quobyte is a high-performance, parallel file system that delivers the performance necessary – whether for workloads that require high throughput, parallel processing, small-file operations in OpenFOAM, or large sequential file operations. It also scales: just add the resources you need and Quobyte takes care of the rest – saving valuable admin time.

Key Benefits

  • High-performance – our distributed parallel file system delivers all the power you need
  • Massively scalable – start with a few drives and scale linearly to hundreds of PBs and beyond
  • All interfaces – access data through native clients for Linux, Windows, and macOS or use S3, NFS, SMB, and Hadoop
  • Hardware-independent – use commodity servers, with HDDs, SSDs, and NVMe

Build Cross-Boundary Data Pipelines

Home directories are the intermediate storage tier between high-performance, short-term scratch space and low-performance, long-term archive. Other than price/performance targets, there is no difference between home and scratch.

With Quobyte you don't need a separate tier for home directories. The same files are accessible through all of the storage interfaces: you can migrate applications back and forth, copy data within the file system, run a preprocessing step that modifies a file in place, then download the result via S3. Because Quobyte software unifies the storage, you can build pipelines that cross the boundaries of storage interfaces. As a result, management overhead is significantly reduced, as are operational costs.
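As a rough sketch of such a cross-interface pipeline: the volume layout, mount point, and the assumption that S3 object keys mirror the in-volume path are all illustrative, not taken from Quobyte's documentation. A temporary directory stands in for the mounted volume so the sketch runs anywhere.

```python
"""Sketch of a cross-interface pipeline on a unified namespace.

Assumptions (hypothetical, for illustration only): the volume is
mounted at a POSIX path, and the same files are reachable as S3
objects whose keys mirror the path inside the volume. A temporary
directory stands in for the mount point here."""
import tempfile
from pathlib import Path


def preprocess(path: Path) -> None:
    """Pipeline stage 2: modify the file in place via the POSIX interface."""
    text = path.read_text()
    path.write_text(text.upper())


def s3_key_for(volume_root: Path, path: Path) -> str:
    """Map a POSIX path inside the volume to the S3 object key an
    S3 client (e.g. boto3) would use for the same data, assuming
    keys mirror the in-volume path."""
    return path.relative_to(volume_root).as_posix()


# Stand-in for the mounted volume; on a real system this is the mount point.
volume = Path(tempfile.mkdtemp())

# Stage 1: ingest via the POSIX interface.
raw = volume / "run-042" / "results.csv"
raw.parent.mkdir(parents=True)
raw.write_text("sample,value\na,1\n")

# Stage 2: preprocess in place -- no copy to another silo needed.
preprocess(raw)

# Stage 3: the processed file is now downloadable through S3 under this key.
print(s3_key_for(volume, raw))  # run-042/results.csv
```

The point of the sketch is that every stage touches the same bytes: there is no export/import step between scratch, home, and the S3-facing side of the pipeline.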

Key Benefits

  • Consolidates scratch and home directory storage tiers
  • Reduces administrative overhead
  • Reduces operational costs
  • Enables greater collaboration among researchers
  • Facilitates data curation for later use and allows for policy-based movement to the archive
  • Real-time analytics provide a clear overview of storage usage

Better Storage Economics

Easily manage all storage tiers in one system with performance isolation between the tiers and policy-based movement between tiers – so that “cold” data automatically moves from costly but fast SSDs to more economical HDDs for archiving.
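To make the tiering idea concrete, here is a minimal sketch of the decision such a policy makes. This is not Quobyte's actual policy engine or configuration syntax; it only illustrates the rule described above: files untouched for longer than a threshold become candidates to move from the SSD tier to the HDD tier.

```python
"""Illustrative sketch of a cold-data tiering rule -- NOT Quobyte's
actual policy engine or syntax. Files whose last access is older
than a threshold are candidates to move from SSD to HDD."""
from dataclasses import dataclass

SECONDS_PER_DAY = 86_400


@dataclass
class FileInfo:
    path: str
    last_access: float  # Unix timestamp of last read/write


def hdd_candidates(files, now, cold_after_days=30):
    """Return paths of files whose data has gone 'cold'."""
    cutoff = now - cold_after_days * SECONDS_PER_DAY
    return [f.path for f in files if f.last_access < cutoff]


files = [
    FileInfo("/vol/scratch/job-1/out.dat", last_access=1_000_000.0),
    FileInfo("/vol/home/alice/notes.txt", last_access=4_000_000.0),
]
# "Now" chosen so the first file is more than 30 days cold, the second is not.
now = 1_000_000.0 + 31 * SECONDS_PER_DAY

print(hdd_candidates(files, now))  # ['/vol/scratch/job-1/out.dat']
```

In a unified system, the data movement this rule triggers is an internal placement change rather than a migration between separate storage silos.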

Key Benefits

  • Hardware-independent – Your choice of drives (HDD, SSD, NVMe) to hit the price and performance targets that fit your needs and budget
  • Policy-based tiering – Easy to set up and manage to reduce cost and admin overhead
  • Unified storage – Data can move freely between scratch, home, and archive without tedious and time-consuming migrations between silos
  • No separate archive system required – Reduces costs and management overhead

System Requirements

All you need to get started is:

  • 1 hour of your time
  • 4+ servers with 1+ HDD or SSD
  • 16+ GB RAM
  • IP network, 10 Gbit/s or faster
  • Linux

Sample Architecture:
Archival Storage

  • 4U
  • Intel Xeon E5-2620v4
  • 32GB RAM
  • Avago 9300-8i
  • 48x 8TB SATA HDD
  • 2x 10 Gbit Ethernet

Sample Architecture:
High Throughput

  • 2U
  • 1x Intel Xeon E5-2640v4
  • 32GB RAM
  • Avago 9300-8i
  • 24x 8TB SATA SSD
  • 40/56 Gbit Ethernet

Need to find out more quickly? Contact us or download Quobyte for a free 45-day trial.

Ready to get started?

Quobyte for Bioinformatics

Learn more about how Quobyte boosts bioinformatics workloads in microscopy and genomic research. It helps you save time and get to those medical breakthroughs faster.


Boost TensorFlow Workloads

Quobyte is the first distributed file system to offer a TensorFlow plugin, which increases throughput performance by up to 30%. The plugin lets TensorFlow applications talk directly to Quobyte without going through the kernel.

Better performance
Quobyte reduces kernel-mode transitions and lowers CPU usage. This increases GPU utilization, speeding up both model training and the inference stage of the ML workflow. Quobyte's performance and scalability thus help you train faster across larger data sets for more accurate results.

More Flexibility
Quobyte's TensorFlow filesystem plugin works with almost any Linux system – even older versions – because it bypasses the kernel. It can also be used with Google Cloud Platform (GCP) for model training: train models locally on sample data sets, then use GCP for training at scale.