By Dave Turek
The high-performance computing (HPC) community is changing, and changing fast.
For some time the HPC community, consumers and vendors alike, has turned to floating-point capability as the ultimate measure of a supercomputer's value: the more floating-point operations per second (flops) you have, the better the system you have.
This mantra has been reinforced by the twice-yearly publication of the TOP500 list, which is based on a benchmark that favors systems heavy on floating-point capability. And, to be honest, this was quite useful for a very long time. But the world has evolved, and so have supercomputers. Big Data has changed everything. Combing through huge amounts of consumer data or years of financial statistics requires a new model and a new yardstick.
Over the last several years, segment by segment, the Big Data phenomenon has intruded on this floating-point-centric value scheme and begun to force consideration of alternative measurements for assessing system value. It's time for a new approach, which we call data centric computing.
A recent paper published by Cabot argues that the buying decision for supercomputers in the Big Data era should depend as much on factors like memory bandwidth and integer performance as on floating-point capability. There is an empirical basis for this proposition: substantial application benchmarking by IBM has shown, over and over again, that the correlation between floating-point capability and application performance is relatively weak.
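One way to see why peak flops can correlate weakly with delivered performance is a roofline-style back-of-envelope estimate (not from the paper; the machine numbers below are hypothetical): a kernel's attainable throughput is capped by the smaller of peak compute and memory bandwidth times the kernel's arithmetic intensity, so low-intensity, data-heavy workloads never come close to the flops rating.

```python
def attainable_gflops(peak_gflops, bandwidth_gbs, intensity_flops_per_byte):
    """Upper bound on sustained GFLOP/s for a kernel of given
    arithmetic intensity (flops performed per byte moved)."""
    return min(peak_gflops, bandwidth_gbs * intensity_flops_per_byte)

# Hypothetical node: 500 GFLOP/s peak compute, 100 GB/s memory bandwidth.
peak, bw = 500.0, 100.0

# A dense matrix multiply reuses each operand many times (high intensity),
# so it can reach the compute roof...
print(attainable_gflops(peak, bw, 10.0))    # compute-bound

# ...while a streaming update y[i] += a * x[i] does ~2 flops per 24 bytes
# moved (read x, read y, write y): intensity ~0.083 flops/byte.
print(attainable_gflops(peak, bw, 2 / 24))  # memory-bound, far below peak
```

Under these assumed numbers the streaming kernel is limited to roughly 8 GFLOP/s regardless of peak flops, which is the pattern IBM's benchmarking points to: for data-heavy applications, bandwidth, not floating-point capability, sets the ceiling.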
These facts have not gone entirely unnoticed, and there are many different interpretations of how they should shape system design. In IBM's case, we have expressly pursued the notion of Data Centric Systems, which seek to minimize data movement within a supercomputer and thereby radically reduce data-movement-induced latency. But other ideas are emerging as well.
Through the OpenPOWER Foundation, now with over 115 members, we have seen strong interest from companies looking to produce HPC systems with diverse kinds of innovation. One of OpenPOWER's newest members, Cirrascale, highlighted some of its plans at the OpenPOWER Summit in San Jose. Cirrascale CEO Dave Driggers recently blogged about how OpenPOWER is helping address the HPC industry's incessant need to explore new avenues of technology to meet growing compute-intensive requirements.
This is just the beginning of a transformative period in HPC during which choice is expected to expand and innovation is expected to blossom.