By Dave Turek
The high-performance computing (HPC) community is changing, and changing fast.
For some time the HPC community, consumers and vendors alike, has turned to floating-point capability as the ultimate measure of a supercomputer's value: the more floating-point operations per second (flops) a system delivers, the better it is judged to be.
This mantra has been reinforced by the twice-yearly publication of the TOP500 list, which ranks systems using a benchmark that favors heavy floating-point capability. And, to be honest, this was quite useful for a very long time. But the world has evolved, and so have supercomputers. Big Data has changed everything. Combing through huge amounts of consumer data or years of financial statistics requires a new model and a new yardstick.
Over the last several years, segment by segment, the Big Data phenomenon has intruded on this floating-point-centric value scheme and begun to force consideration of alternative measurements for assessing system value. It's time for a new approach, which we call data-centric computing.