I was at a conference of cryo-EM scientists recently, and if you read our recent blog posts about cryo-EM and 3D protein modelling, you know that these scientists are some of the most data-hungry, bandwidth-constrained specialists in the world. They need infrastructure that can absorb and manage petabytes for both immediate, large-scale analysis and long-term, full-fidelity storage. In a discussion with several of these scientists, I asked whether conventional storage solutions were their only problem, or whether getting past the cultural and financial barriers of central IT was also a key issue.
Heads bobbed. The scientists looked at one another knowingly. A couple made “mm-hmmm” sounds of agreement, but no one wanted to articulate their feelings.
I get it. Different groups have different practices and priorities, and those with experience understand the dangers of rushing to judge another group’s operations.
Still, I’d touched a nerve. Something important needed to be examined.
Not long after this, I spoke with our storage contacts at Gartner, and that conversation sharpened my thinking. Essentially, cryo-EM scientists are just another breed of application users. And enterprise application users, not surprisingly, tend to be frustrated with central IT.
I have a lot of respect for Gartner research. One Gartner VP analyst, Julia Palmer, has nearly three decades in the IT industry, two of them focused on storage. Her “2022 Strategic Roadmap for Storage,” published in March, delves into storage-as-a-service (STaaS) adoption and how distributed file systems can address a widening array of use cases while reducing solution complexity and costs. At the same time, I know from talking with people ranging from scientists to everyday IT workers that such technologies, while very much needed, are not widely adopted. Why? Because the technologies and strategies that brought IT success 10 and 20 years ago have become so entrenched that they may now be interfering with further progress.
Consider things like virtualization, LUNs, and filers — technologies that enabled excellent storage advances in their day. No wonder they became IT staples, and IT learned how to deploy and manage them with great success. Today, however, these technologies are comparatively expensive for many use cases. They may be relatively slow, difficult to deploy, and resistant to scaling, all of which makes them unsuited to a world where demand now stems from scale-out applications, not application servers and databases. These are “old school” technologies, centralized dinosaurs that just sit there doing what they have always done while the world evolves around them.
The cryo-EM and enterprise application owners are in the same boat. They have new use cases generating potentially tens to hundreds of petabytes of data. They go to IT for a storage solution and get back overpriced, unscalable volumes. Often, application owners then turn to the cloud for a better answer, only to be met with performance, management, and hidden-cost issues.
And honestly, no one is at fault. Application owners need solutions within the budget and infrastructure boundaries set by their organizations. IT can’t be expected to campaign for change when they spend every day just trying to put out fires. Everyone is just trying to do their job. But this delay in progress, where every day has its own justifiable excuse, holds businesses back because innovation takes too long.
For you history buffs, think of the fall of Constantinople in 1453 to the Ottomans. Constantinople had survived the fall of the Western Roman Empire and continued as the capital of the Eastern Roman (Byzantine) Empire for another thousand years. At the time, a Hungarian engineer named Orban had devised a way of casting large siege cannons, a significant technological advance. Orban tried to sell his services to the Byzantine emperor Constantine XI, but he was turned away. Quite simply, Constantine said he didn’t have the capex resources to come up with the materials and rollout (infrastructure) nor the opex resources to afford Orban and his project’s maintenance. After all, Constantinople’s walls were legendary and had protected the city for centuries. Constantinople would be just fine, thank you. Orban hopped over to the competition, namely the young Ottoman leader Sultan Mehmed II. Mehmed invested in the new technology and soon thereafter used it to launch his siege on the Byzantine capital. The supposedly impregnable Constantinople fell to the Ottomans less than two months later.
The moral is clear: Those who fail to invest in modern technological advantages will have to face the consequences against those who do. Innovate or die.
Enterprise Storage Priorities
After all these conversations with industry analysts and in-the-trenches application owners, I had much food for thought. As a result, I’ve come to recognize several factors that distinguish a dinosaur-class storage solution (let’s call it a “dinostore”) from an optimal, modern, cloud-scale solution.
- Forward-looking storage is cloud/exabyte-scale, built on software that can turn ordinary servers into a single, self-healing storage platform.
Quobyte does not require investment in proprietary appliances. It harnesses existing infrastructure and integrates additional capacity seamlessly, all managed through a platform that anyone with basic storage experience can configure.
- Like any cloud service, on-premises storage should be non-disruptive, meaning it features the resiliency necessary to operate 24/7 no matter what.
Quobyte delivers exceptional fault tolerance, allowing application owners to work without maintenance windows. And because of Quobyte’s remarkable latency tolerance, the platform can deploy redundancy across multiple sites with potentially international range, all without sacrificing the usability of strong consistency and POSIX semantics.
- Flash and HDD media are complementary in mass-scale storage, allowing the most flexible, performant storage solutions at optimal costs.
Quobyte lets users tier data across flash and HDD, or dedicate specific media to the workloads they suit best.
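To make the tiering idea concrete, here is a minimal sketch of a placement rule that routes hot, recently written data to flash and colder data to HDD. The tier names, age threshold, and function are purely illustrative assumptions, not Quobyte’s actual policy syntax:

```python
import time

# Illustrative placement rule: recently modified files stay on flash,
# colder data moves to HDD. Names and threshold are hypothetical.
FLASH_TIER = "flash"
HDD_TIER = "hdd"
HOT_AGE_SECONDS = 7 * 24 * 3600  # files younger than a week count as "hot"

def choose_tier(last_modified, now=None):
    """Return the tier a file should live on, based on its age."""
    now = time.time() if now is None else now
    age = now - last_modified
    return FLASH_TIER if age < HOT_AGE_SECONDS else HDD_TIER

# A file written two days ago is still hot:
two_days_ago = time.time() - 2 * 24 * 3600
print(choose_tier(two_days_ago))  # prints "flash"
```

In a real platform, a policy engine applies rules like this automatically as data ages, so applications keep one namespace while the system moves bytes between media behind the scenes.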
- Modern platforms should enable both file and object storage, even in the same namespace, without creating data silos.
Quobyte makes such flexible data architectures simple to implement. We designed the platform with hyperscale foundations, then covered it with a deceptively simple interface and API.
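To illustrate what a shared file/object namespace means in practice, here is a hedged CLI sketch: the mount path, bucket name, endpoint URL, and the availability of an S3-compatible export are all assumptions about a particular deployment, not literal commands to copy:

```shell
# Hypothetical paths, bucket, and endpoint — adjust to your deployment.
# Write a file through the POSIX mount of a volume:
echo "hello" > /mnt/quobyte/dataset/readme.txt

# Read the same data back as an object through an S3-compatible endpoint
# (assumes the volume is also exported via S3 and credentials are set up):
aws s3 cp s3://dataset/readme.txt - --endpoint-url https://s3.example.internal
```

The point is that the same bytes are reachable both as a file and as an object, so pipelines built on POSIX tools and pipelines built on S3 SDKs can share data without copies.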
- Storage should be location-independent.
Organizations should be free to seize on the respective advantages of keeping data on-premises, in colos, at the edge, or in the cloud. Quobyte seamlessly bridges all of these, removing silos and location restrictions.
- Security should never be an afterthought.
Some storage platforms date back to an era before features such as end-to-end encryption were natively supported. Quobyte was built from the ground up with security in mind, for data both at rest and in transit.
In so many ways, Quobyte makes life easier. This should resonate with central IT, which struggles every day exactly because persisting with legacy solutions is so difficult. This may seem counterintuitive. Dinostore infrastructure is deployed and paid for. It’s easy to keep doing the same thing and be comfortable in thinking things are good enough. Emperor Constantine XI thought so, too.
Quobyte will help organizations adapt and evolve, with greater agility, scalability, resiliency, and cost savings. Whether you’re an application owner or already in central IT, it’s low-risk and straightforward to give Quobyte a quick trial. It’s time to evolve your storage.