Quobyte's Scale-Out HPC File System Helps Oregon State University Deliver Performance While Lowering TCO

The Center for Genome Research and Biocomputing at Oregon State University (OSU) is the high-performance computing backbone that thousands of researchers across dozens of disciplines rely on to get their jobs done. As workloads became increasingly intensive, the center's legacy file storage system struggled to keep up with demand. By implementing Quobyte's Data Center File System, the administrators have created a more robust, user-friendly, and high-powered environment for their researchers while simplifying their own jobs and staying within budget.

The Center for Genome Research and Biocomputing at Oregon State University (OSU) facilitates data-driven research in the life and environmental sciences, spanning fields that include engineering, forestry, pharmacy, public health, and more. Over 2,000 users, including academics and students, rely on the high-performance computing infrastructure the center provides to run simulations, process data, train algorithms, and store assets.

At OSU, computing enables research. That's why the center runs a 5,000+ node cluster over a 100-gigabit network. With over 8 PB of data currently online, they found that their old file system, ZFS, simply wasn't up to the task of managing all that data. Once they decided to leave ZFS behind, they began testing other systems, such as Lustre, but realized that the labor costs of managing such a fragile system were unsustainable on their grant-based budget.

That’s when they turned to Quobyte.

"I love the model that allows you to scale with your dollars and your return… That's why we love Quobyte. And the support is enterprise-class, top notch."
Chris Sullivan, Associate Director for Biocomputing, Oregon State University

The Challenge: Finding a High Performance, Low Maintenance File System on a Tight Budget

The Center for Genome Research and Biocomputing at OSU faces many problems familiar to academics and researchers. At the forefront are the incredibly large projects that generate tremendous amounts of both initial and downstream data, which necessitates a cost-effective, expandable, and controllable file system.

Second, their budget is tightly defined by grant funding. Chris Sullivan, the center's Associate Director for Biocomputing, explained that "grants tend to fund postdocs, graduate students, principal investigators, and their labor—not hardware. And so, we always have to do better work with the money we have to do it." This means they weren't willing to pay extra for the specialized hardware required by many other HPC storage solutions, and, because grants arrive one at a time, they also couldn't afford the ongoing labor costs of running a difficult-to-manage file system.

On top of needing a robust solution at a reasonable cost, they needed a highly configurable one. They tried other software, such as BeeGFS, but encountered "tremendous limits imposed upon us for configuration." Sullivan continued, "We had no ability to create tenancy. We had limited capability to extend the way we wanted to. And the way that the volumes worked, we were stuck into how we were configured."

The Solution: Quobyte’s Unconditional Simplicity

Once they began testing Quobyte, Sullivan realized “right out the gate that Quobyte had the reconfigurable flexibility that we were looking for, that came from some of the other larger, more expensive file systems.” The solution’s unconditional simplicity enabled them to operate like a hyperscaler, maintain flexibility in their configuration, and even manage the system without ever needing to take it offline.

For instance, labs purchase a certain amount of disk capacity for their research, and Sullivan and his team can now use Quobyte to immediately "carve a chunk for them with a quota, size it dynamically, and hand it off to them, all while managing it through the web console or the command-line client." This solved their tenancy issues while providing a variety of other benefits: linearly scalable performance, no requirement for specialized hardware, and built-in fault tolerance.

For OSU, the icing on the cake was Quobyte’s “enterprise-class support.” Sullivan explained that as they struggled to manage other file systems like BeeGFS, they never received the support that they needed in a timely manner. “It was answers that we needed to move us forward on solutions,” he said about Quobyte, “and that created respect and a working relationship that we needed with our support techs.”

And, of course, we can’t forget the budget. “Working with Quobyte, we were able to get the cost models right on target,” concluded Sullivan.

The Results: Configurability, Scalability, and Value

After thoroughly testing their options, the HPC team at OSU's Center for Genome Research and Biocomputing chose Quobyte, and the results have followed. "I'm not going to deploy something unless it's perfect," Sullivan explained. "Once it's deployed, it's very hard to pull it back from the research community."

Sullivan concluded that Quobyte was the perfect solution for a few key reasons. First, they loved the ability to manage their configurations on the fly, which enabled them to "create tenancy quality of service that we didn't have in any of the other ones." That means higher uptime, greater data security, and more streamlined resource allocation. As a result, when we spoke to them, they had 350 active clients and around 500 TB of disk on order.

Second, they were amazed at Quobyte's ability to scale without limit simply by adding more infrastructure. "The more you put behind it, the better it gets," said Sullivan. "I love the model that allows you to scale with your dollars and your return… I get more machines, I get more drives, and then my users get more every single time we put something there." Quobyte uncapped their potential, and they look forward to expanding their capabilities as the institution's research continues to grow.

Third, OSU appreciates how Quobyte cut their ongoing labor costs to keep them well within budget. Not only does this free up administrators' time, but it also opens the door to allocating more resources to hardware build-outs and other performance-boosting expenditures. The combination of simplified storage and Quobyte's professional support saves both time and money.

Lastly, Sullivan reported on what he called "the telltale sign of success on any system": whether or not users embrace it. He went on to explain, "We implemented Quobyte in August. We released it to the general research public here at Oregon State. And I would say, by December, we started hammering orders. People wanted disks. They wanted on, they wanted off the ZFS. And so, to us, that's the telltale sign."

Overall, Quobyte delivers performance to OSU’s research community, simplicity to the administrators who run their HPC systems, and the cost reduction that they need to remain within their grant funding.

“The people like it,” concluded Sullivan. “That’s the point.”