
Are You a Next-Gen Infrastructure Person?

By Victoria Koepnick

If you work for a vendor of enterprise IT infrastructure, one of the best moments of your day, or even your week, is when you’re speaking with a prospective customer’s team and suddenly — Aha! — they get it. They understand your product, service, or solution; they understand how its engineering solves their problem or saves them money.

The Aha! Moment

That ‘they get it’ moment is more common among people working with HPC applications, machine learning/AI, media and entertainment, electronic design automation (EDA), and other high-performance, data-hungry production environments. It’s more common among people working with newer infrastructure models, such as containers or OpenStack. It’s more common among people managing workloads with extremely large volumes of small files, or managing heavy metadata loads.

Call it the Infrastructure Aha! Gap – the gap between those working with next-gen infrastructures in the current post-web era, where distributed systems are the norm, and those who aren’t. Yet.

In traditional storage infrastructures, certain assumptions have stuck: file systems are rigid and difficult to scale; reliability and availability come from redundant hardware; and if you need reliability and performance, it’s worth paying a premium for brand-name storage. Users of such traditional infrastructures are unlikely to have that ‘they get it’ moment.

Next-Gen Infrastructure

Next-gen infrastructure people are different. They don’t want to architect separate storage systems for block, file, and object. They refuse to sacrifice performance for scalability. They don’t want to think about downtime, whether planned or unplanned. They require storage that is runtime-configurable and can accommodate changes on the fly. They tend to have a broader perspective, one that treats storage as part of an infrastructure closer to that of the early hyperscalers.

Typically, when people get it, it’s because they are struggling to meet capacity or performance requirements. It’s not only the pressure to maintain more data; it’s more of everything: more users, more compute nodes, more results to deliver. And many times there’s no additional budget to meet the growing need.

These issues are common in “commercial HPC” or HPC-like workloads, where growing data volumes strain operations. Life sciences are a prime example, with DNA sequencing and high-resolution medical imaging, as are earth sciences, with geographic analysis for exploration. We see the same kinds of pressures in media enterprises doing editing, rendering, transcoding, and visual effects work. In financial services, it shows up in backtesting against historical data and in analytics for forecasting and modeling.

Not only is traditional storage technically inadequate; its economics are, too. Next year’s storage requirements may be 10x what they are today, but the budget and the staff won’t grow in equal measure. Today it’s petabytes; before long, it will easily be multiple exabytes. The sheer cost of using proprietary storage arrays to keep up, even if it were technically possible, would be prohibitive.

Removing Growth Inhibitors

One group that increasingly gets it is CIOs and CTOs, because they ultimately have responsibility for ensuring that IT will not be an inhibitor to the growth of the organization. An animation studio can’t have its creative artists stifled by a lack of throughput. Climate science research cannot be disrupted by insufficient data-processing power. A bio-IT company cannot be held back by an inability to use the newest tools for genome editing.

Technology restrictions cannot dictate operational decisions, and storage cannot be the bottleneck to progress. We get it. Do you?


Written by

Victoria is the VP of Marketing for Quobyte. She continues to live the dream of bringing hot new data storage technologies and products to market.