Quobyte MPI-IO Support with Kernel-Bypass Now Available

By Alexey Karandashev

We are pleased to announce our cooperation with MPICH and the release of our Quobyte I/O backend driver on the main branch of MPICH, available publicly today. The driver uses libquobyte, the user-space Quobyte library, to communicate directly with the Quobyte file system, allowing MPI programs built with MPICH to bypass the native kernel I/O path.

[Figure: MPI-IO vs. POSIX]

Bypassing the kernel I/O path reduces data copying, since read and write data no longer has to be shuttled back and forth between user space and the kernel. It also spares the kernel expensive context switches. As a result, more CPU time and memory bandwidth are available for compute jobs, and operation latency is reduced.


As a true parallel file system, Quobyte supports concurrent lockless read/write access to the same file, as well as file striping across a large number of drives and servers for high throughput. Quobyte enables you to harness the aggregated performance of your storage cluster, even for a single file if necessary.

If you are familiar with building MPICH, building it with the Quobyte I/O backend driver is easy. Simply add a flag to the build process. Before you start, make sure that you have the libquobyte package installed on all your compute nodes.

On the compiler host, you’ll need the following additional packages*:

  • libquobyte-devel
  • git
  • gcc
  • automake >= 1.15
  • libtool >= 2.4.4
  • autoconf >= 2.69

* Note: RedHat/CentOS/Oracle Linux/Scientific Linux 6 and 7 do not ship the required automake, libtool, and autoconf versions by default. Either build and install them manually, find trustworthy precompiled binaries, or use CentOS/RHEL/OEL 8.

Once these are installed you can download the source code and build the latest MPICH release:

  1. Clone the current MPICH source and enter the directory. For details, refer to the MPICH guide:

    git clone
    cd mpich

  2. Generate the configure file according to the MPICH instructions. We excluded UCX in our build, but feel free to adjust the settings to your needs:

    ./autogen.sh --without-ucx

  3. Run configure for MPICH with quobytefs enabled, and add further parameters (prefix etc.) as needed:

    ./configure --with-file-system=ufs+nfs+quobytefs --with-device=ch3

  4. Build and install:

    make install

  5. Distribute the binaries to your compute nodes, e.g. through a shared Quobyte mount.

  6. Finally, allow unaligned direct I/O in your libquobyte config. You can do this in one of three ways:
      1. In the system-wide configuration file /etc/quobyte/client.cfg

      2. In a custom config file, e.g. in your home directory:

        export QUOBYTE_CLIENT_OPTIONS="-c /home/user/quobyte.cfg"

        Then add the unaligned direct I/O option to /home/user/quobyte.cfg

      3. Or using the environment variable QUOBYTE_CLIENT_OPTIONS directly, which applies only to your current shell session:
        export QUOBYTE_CLIENT_OPTIONS="--enable-unaligned-direct-io 1"

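For reference, a minimal custom config file for the second option could look like the following. The flag itself is the one used with QUOBYTE_CLIENT_OPTIONS above; the one-flag-per-line file syntax is an assumption on our part, so verify it against your Quobyte client documentation:

```
# /home/user/quobyte.cfg -- file syntax assumed, verify against your docs
--enable-unaligned-direct-io 1
```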
Now you can start your jobs running on Quobyte. File paths starting with "quobyte://" are handled by our MPI-IO driver, and all I/O calls are issued directly from user space without kernel involvement. The format for Quobyte URLs is "quobyte://<registry DNS name>/<volume>/<path>".

To test your setup you can run the ROMIO Test Suite:

cd mpich
src/mpi/romio/test/runtests -fname=quobyte://<registry address>/<volume>[/<path>]/test_mpi_file

To strengthen security and access control, you can take advantage of Quobyte's certificate support. This lets you provide users with X.509 certificates for secure access from user-space applications. I/O from a process is then limited to the user who owns the certificate, even if that user has root privileges on the machine. The certificate location can be configured in the config file specified via QUOBYTE_CLIENT_OPTIONS (see above).

For instructions on how to configure certificates in Quobyte and how to restrict them to specific users, please visit our service authentication support page.
