High Performance Computing: Stop Scaling the Hard Way

By Aimee Davis

“When high-performance computing (HPC) began in the 1990s, hardware platforms were proprietary and limited in their reliability, which wasn’t much of a problem, because workloads were relatively small.

Nearly thirty years later, workloads have grown to potentially hundreds of petabytes, and early HPC cluster architecture has failed to scale effectively, especially in terms of reliability.

Google solved the scaling problem by embracing rather than fighting failure. The company did this through software, eliminating traditional hardware dependency and allowing even thousands of nodes to perform as a single self-healing entity.

The HPC world can take a quantum leap forward in performance, efficiency, and ROI by following Google’s example and leaving behind its decades-old, hardware-centric roots.”

– Björn Kolbeck, CEO of Quobyte

To hear more of Björn’s thoughts on HPC, check out his recent article in Inside HPC.
