
Announcing Quobyte 1.3 – Parallel File System with Erasure Coding

By Felix Hupfeld

The first week of August not only marks Quobyte’s third birthday, but also the release of Quobyte 1.3. With more than six months of engineering behind it, the updated version contains countless improvements and exciting new features. Together, they make Quobyte a Google-style storage infrastructure for a large variety of real-world workloads.

Shiny New Features – Erasure Coding and More

While Quobyte 1.2 focused mainly on scalable and fault-tolerant high-performance block storage (thanks to which it’s now driving a public OpenStack cloud), Quobyte 1.3 takes our parallel file system core to the next level. It can be used for all file system workloads, from filers and container platforms to HPC clusters. The main achievements are:

  • High-performance metadata. Quobyte scales to thousands of file creates per second on a single file system volume. And since Quobyte doesn’t place a limit on the number of volumes, metadata performance scales out across volumes as well.
  • Direct erasure coding support. With erasure coding, data that is written from beginning to end can be stored very efficiently while improving data safety. By “efficiently” we mean a storage blow-up factor of a mere 1.5 instead of the 3 of triple replication (see the overhead sketch after this list). With this addition, Quobyte becomes an extremely economical storage system for data-intensive primary storage use cases and for secondary storage. Primary storage profits from high throughput and efficiency for this kind of data, while secondary storage gets maximum protection from erasure coding combined with end-to-end checksums. (Update: More on erasure coding in Felix’s more recent post!)
  • Unified ACLs across native clients and NFS. Extending ACLs for S3 and CIFS is scheduled for version 1.4.
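
To put these numbers in perspective, here is a minimal sketch of the overhead arithmetic. The 8-data-plus-4-coding-block stripe geometry and the function names are illustrative assumptions, not a statement about Quobyte’s actual or default configuration.

    # Raw storage consumed per byte of user data ("blow-up" factor).
    # The 8+4 stripe geometry below is an illustrative assumption only.

    def replication_blowup(copies: int) -> float:
        """n-way replication stores n full copies of every byte."""
        return float(copies)

    def erasure_coding_blowup(data_blocks: int, coding_blocks: int) -> float:
        """A k+m erasure code stores k data blocks plus m coding blocks."""
        return (data_blocks + coding_blocks) / data_blocks

    print(replication_blowup(3))        # 3.0 -> triple replication
    print(erasure_coding_blowup(8, 4))  # 1.5 -> the factor quoted above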

Our direct IO path also received special attention. We’ve improved Quobyte’s random IO performance so that it now consistently delivers sub-millisecond latencies for highly concurrent block IO on flash (while doing quorum replication, of course).

[Screenshot: erasure coding setting in the Quobyte dashboard]

Extending Platforms

No matter if you need file, block, or object storage, we always make sure that data is accessible across all interfaces. Quobyte 1.3 extends this coverage with a native Windows client: Windows workstations and servers can now access Quobyte directly as a parallel file system, with all the associated performance and fault-tolerance advantages.

Improved Management

There’s also been progress on the system management side:

  • Quobyte now has full multi-tenancy support, integrated with quotas and accounting. (More on this shortly.)
  • System setup and maintenance are simplified by a new intelligent automation system, which we call the health manager.

Container-Native Persistent Storage

As you may know, Quobyte is a fully compatible POSIX file system, which means it behaves just like a local file system. That makes it the perfect storage foundation for container infrastructures like Mesos, Docker, or Kubernetes. To make the integration even easier, we added two unique features to Quobyte to specifically support container applications:

  • We extended Quobyte’s fault-tolerant lock mechanisms to provide locking transparently to containerized applications. That way, any application can be run as a highly available, fault-tolerant setup (see the sketch after this list).
  • We solved the access control problem for containerized applications. Quobyte 1.3 can bind containers to specific users and groups and control a container’s access by means of normal file system access control. That means no more one-volume-per-container setups; instead, it enables fine-grained data sharing.
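
To illustrate the locking point: because Quobyte behaves like a local POSIX file system, a containerized application can simply keep using ordinary advisory locks and gets fault-tolerant, cluster-wide locking without code changes. The sketch below is plain standard-library Python; the mount path /quobyte/shared/leader.lock is a hypothetical example, and nothing in the code is Quobyte-specific, which is exactly the point.

    import fcntl
    import os

    # Hypothetical path on a Quobyte volume mounted into the container.
    LOCK_FILE = "/quobyte/shared/leader.lock"

    def run_if_leader(work) -> None:
        """Run `work` only while holding an exclusive POSIX advisory lock.

        A second container instance calling this blocks until the lock is
        released (for example because the first instance crashed), which
        yields a simple active/standby, highly available setup without any
        application-specific coordination code.
        """
        fd = os.open(LOCK_FILE, os.O_CREAT | os.O_RDWR, 0o644)
        try:
            fcntl.lockf(fd, fcntl.LOCK_EX)   # blocks until the lock is free
            work()
        finally:
            fcntl.lockf(fd, fcntl.LOCK_UN)
            os.close(fd)

    if __name__ == "__main__":
        run_if_leader(lambda: print("this instance is currently active"))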

No matter which container infrastructure you run, you are ready to go.

Go ahead and give it a try or get in touch!


Written by Felix Hupfeld

Felix is Quobyte’s co-founder and CTO.