By Dr. John E. Kelly III
This week, President Obama issued an executive order establishing the National Strategic Computing Initiative with the goal of ensuring that the United States leads in the field of high-performance computing. The initiative is aimed at producing computers capable of exascale performance: one billion billion (10^18) operations per second, orders of magnitude faster than today’s most powerful machines.
IBM has been a pacesetter in large-scale computing ever since modern computers emerged in the 1940s. We have collaborated with the US government in producing and deploying computers in the national laboratories and government agencies that help the country retain its leadership in science and commerce and safeguard national security.
We believe it won’t be possible to achieve exascale performance in a way that is affordable and sustainable by following the path that computer scientists have been on for decades. Instead, it’s necessary to develop a bold new approach, which we call data centric computing. It addresses both the modeling and simulation applications that are the traditional focus of the high-performance computing community and today’s new applications in big data analytics and cognitive computing.
For years, the computer industry has made tremendous progress based on the computing architecture first laid out by John von Neumann in the 1940s. But von Neumann’s principles didn’t anticipate today’s data processing challenges. Every day, society generates an estimated 2.5 billion gigabytes of data—everything from corporate ledgers to individual health records to data from devices connected to the Internet of Things to personal tweets.
Because of the fundamental architecture of traditional computing, data has to be moved repeatedly from where it’s stored to the microprocessor. That consumes a lot of time and energy. And now, with the emergence of the big data phenomenon, it’s no longer sustainable.
In the future, much of the processing will move to where the data resides, whether that’s within a single computer, in a network or out on the cloud. Microprocessors will still be vitally important, but their work will be divided up among a variety of specialized chips.
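To make the contrast concrete, here is a minimal, purely illustrative Python sketch (not IBM’s actual system design, and the node layout is invented for the example): a processor-centric sum ships every record to one place before computing, while a data-centric sum computes at each storage node and ships only tiny partial results.

```python
# Three hypothetical storage "nodes", each holding a chunk of a large dataset.
nodes = [list(range(0, 1000)), list(range(1000, 2000)), list(range(2000, 3000))]

def processor_centric_sum(nodes):
    # Traditional approach: copy every record over the bus/network to the
    # processor, then compute. Data moved grows with the size of the dataset.
    gathered = [x for node in nodes for x in node]   # simulate bulk transfer
    return sum(gathered), len(gathered)              # (result, records moved)

def data_centric_sum(nodes):
    # Data-centric approach: run the summation where each chunk lives and
    # move only the partial results. Data moved grows with the node count.
    partials = [sum(node) for node in nodes]         # compute in place
    return sum(partials), len(partials)              # (result, values moved)

total_a, moved_a = processor_centric_sum(nodes)      # moves 3000 records
total_b, moved_b = data_centric_sum(nodes)           # moves 3 partial sums
assert total_a == total_b                            # same answer either way
```

Both paths produce the identical result; the difference is how much data has to travel to produce it, which is exactly the cost the data centric design attacks.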
Today, we’re working with the Oak Ridge National Laboratory and Lawrence Livermore National Laboratory to develop new computer systems based on this approach. When the computers are delivered starting in 2017, they are expected to achieve five to ten times the processing performance of current supercomputers. But raw computation is only part of the story. Just as important, a series of system and software innovations will enable the computers to efficiently handle a wider array of analytics and big data applications.
Researchers from academia, government, and industry will use these computers to address grand challenges in science and engineering. Traditionally, the focus has been on optimizing computer systems to handle hard-core scientific problem solving centered on modeling and simulation, classic HPC. But, increasingly, researchers in diverse domains such as healthcare, genomics, financial analytics, and social behavior have seen the need for the analysis and visualization of large and complex data sets as well. They need systems that help them manage, curate, and analyze data to produce real insights.
The shift to data centric computing will be accelerated by the work done in our national laboratories, but we are already seeing these requirements today, and eventually this new approach will become pervasive as increasingly complex analysis is required across a wide range of applications. Social networking Web sites gather and move vast amounts of data. That’s a job for data centric design. The same is true of e-commerce, and of organizational functions such as marketing, financial management and product development.
Eventually, the new approach will transform the computers on your desk and in your hand. Because of the new design paradigm, they’ll work faster and smarter. They’ll handle more data. And they’ll use less power.
This is the beginning of one of the most significant shifts in the history of computing, and we at IBM look forward to working with our colleagues in government, industry and academia to turn the concept of data centric computing into a full-blown reality.
To learn more about the new era of computing, read Smart Machines: IBM’s Watson and the Era of Cognitive Computing.