
By Dr. John E. Kelly III, Senior Vice President, IBM Research

The microprocessor was one of the most important inventions of the 20th century. Those chips of silicon and copper have come to play such a vital role that they’re frequently referred to as the “brains” of the computer. Today’s computer designs put the processor at the center.

But the needs of businesses and society are changing rapidly, so the computer industry must respond with a new approach to computer design—which we at IBM call data-centric computing. In the future, much of the processing will move to where the data resides, whether that’s within a single computer, in a network or out on the cloud. Microprocessors will still be vitally important, but their work will be divided up.

This shift is necessary because of the explosion of big data. Every day, society generates an estimated 2.5 billion gigabytes of data—everything from corporate ledgers to individual health records to personal Tweets.


Because of the fundamental architecture of computing, data has to be moved repeatedly from where it’s stored to the microprocessor. That consumes a lot of time and energy. And now, with the emergence of the big data phenomenon, it’s no longer sustainable. That’s why we need to turn computing inside out—moving processing to the data.
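To make the contrast concrete, here is a minimal, purely illustrative Python sketch (the shards, their sizes and the averaging task are all hypothetical, not any IBM implementation). The processor-centric version hauls every record to one place before computing; the data-centric version sends a tiny function to where each piece of data lives and moves back only a pair of numbers.

# Illustrative only: the "shards" stand in for wherever data actually lives
# (local disk, another node, the cloud); sizes and values are made up.
shards = [list(range(i, i + 1_000_000)) for i in range(0, 3_000_000, 1_000_000)]

def processor_centric_mean(shards):
    # Ship every record to the processor, then compute: lots of data in motion.
    all_data = [x for shard in shards for x in shard]
    return sum(all_data) / len(all_data)

def data_centric_mean(shards):
    # Ship a small function to each shard; only a (sum, count) pair travels back.
    partials = [(sum(shard), len(shard)) for shard in shards]
    total = sum(s for s, _ in partials)
    count = sum(n for _, n in partials)
    return total / count

assert processor_centric_mean(shards) == data_centric_mean(shards)

Both functions produce the same answer; the difference is how much data has to move to get it, and that difference is exactly what data-centric design attacks.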

Over time, the shift will have huge consequences for everybody—from the managers of high-end data centers to kids playing games on their smartphones.

An early step toward data-centric computing came today, when the United States’ Oak Ridge National Laboratory and Lawrence Livermore National Laboratory announced they’re investing $325 million to purchase two supercomputing systems based in part on this new approach. IBM is developing Oak Ridge’s Summit and Lawrence Livermore’s Sierra systems based on our POWER microprocessors, and in collaboration with technology partners NVIDIA and Mellanox.

Rather than relying solely on the central processing unit, or CPU, to do their data crunching, these systems also use specialized graphics processing units, or GPUs, from NVIDIA to handle some of the data-processing tasks. The GPUs and POWER CPUs are tightly coupled to memory chips where often-used data is stored, and the CPU and GPU elements talk to one another via Mellanox’s interconnect.
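As a rough illustration of that division of labor (this is a hedged sketch, not the Summit or Sierra software stack), the Python fragment below assumes the optional CuPy library and a CUDA-capable NVIDIA GPU; it offloads a heavy transform to the GPU, keeps the intermediate results in device memory, and returns only a single number to the CPU. Without a GPU it simply falls back to NumPy.

# A sketch of CPU/GPU division of labor, assuming CuPy and a CUDA GPU are
# available; it falls back to NumPy so the example still runs without a GPU.
import numpy as np

try:
    import cupy as xp          # GPU arrays with a NumPy-like interface
except ImportError:
    import numpy as xp         # CPU fallback for machines without a GPU

def spectral_energy(samples: np.ndarray) -> float:
    data = xp.asarray(samples)        # one transfer into (device) memory
    spectrum = xp.fft.rfft(data)      # the heavy transform runs where the data sits
    power = xp.abs(spectrum) ** 2     # intermediate result stays in device memory
    return float(power.sum())         # only a single scalar crosses back to the CPU

print(spectral_energy(np.random.rand(1_000_000)))

The point of keeping the intermediate arrays on the GPU is the same point the paragraph makes about tight coupling: the less data that has to cross the interconnect, the more of the machine’s time goes into useful work.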

When Summit and Sierra are delivered starting in 2017, they are expected to achieve five to 10 times the processing performance of current supercomputers. But raw computation is only part of the story. Just as important, a series of system and software innovations will enable the computers to efficiently handle a wider array of analytics and big data applications.

IBM’s Blue Gene supercomputers made great leaps forward in energy efficiency. Summit and Sierra represent great advances in data efficiency along with significant improvements in energy efficiency.

That’s important because the national laboratories offer researchers from academia, government, and industry access to time on their computers to address grand challenges in science and engineering. Traditionally, the labs’ computers have been optimized to handle hardcore scientific problem solving, using techniques such as modeling and simulation. But, increasingly, researchers are seeking help with projects in diverse domains such as healthcare, genomics, economics, financial systems, social behavior and visualization of large and complex data sets. They need systems that help them manage and sort data, not just run algorithms.

Here are some examples of the new capabilities enabled by these systems:

Healthcare: Pharmaceutical researchers will be able to better simulate the interactions of molecules to identify patterns that will help their companies develop drugs to target specific cells.

Energy: Engineers will be able to increase the use of wind energy by designing efficient wind turbines that can withstand the elements in geographic regions known for inclement weather.

Air Travel: Metallurgists and mechanical engineers will be able to design better jet engines to withstand heat and other stresses, for faster, more efficient air travel.

Big data and analytics applications benefit from increased flexibility in the way computer systems are designed. In traditional computing, much of the innovation happens on and around the CPU. In a sense, it’s a one-size-fits-all world. As the big data phenomenon grows, we believe innovation will increasingly take place throughout the computer system. Moving processing closer to the data will be one locus of innovation, and there will be others.

That’s why IBM has opened up its POWER processor architecture for others to use and build upon—through the OpenPOWER Foundation. By opening up POWER for innovation and collaboration, we make it easier for makers of server computers and companies like Google to design computers that fit their needs like a custom-tailored suit. Another plus: the companies that are members of the Foundation innovate continuously to create new capabilities and address new challenges. They’re not limited by the product upgrade cycles of a single microprocessor supplier.

Revolutions typically start imperceptibly. A shift takes place out of plain sight, but it sets in motion a series of actions and reactions. Eventually, a pattern begins to emerge. Then it gains momentum. So it will be with data-centric computing.

IBM’s aha! moment came in 2011, when David Turek, IBM’s vice president for exascale computing, asked a roomful of IBM technologists a seemingly dumb question: “How much does it cost to move a single bit of data from point of origin to point of computation?” People in the computer industry hadn’t been asking that question. It took a while for the team to dig up answers, but when they did, it became readily apparent that the industry had to begin to address the costs in time, money and energy of moving large volumes of data within computing systems and networks. Out of that revelation came our data-centric design initiative.
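As a back-of-the-envelope illustration of why Turek’s question matters, here is a small Python sketch. The per-bit energy figures are rough assumptions chosen only to show the shape of the problem (moving data across memory and networks costs orders of magnitude more than moving it on-chip); they are not measured values for any IBM system.

# Illustrative assumptions only -- not measured figures for any real system.
ASSUMED_PJ_PER_BIT = {
    "on-chip":     0.1,     # nudging a bit a few millimetres across the die
    "main memory": 20.0,    # fetching a bit from DRAM
    "network":     1000.0,  # moving a bit between nodes in a cluster
}

def joules_to_move(terabytes, picojoules_per_bit):
    bits = terabytes * 8e12                   # 1 TB = 8 x 10^12 bits (decimal)
    return bits * picojoules_per_bit * 1e-12  # picojoules -> joules

for path, pj in ASSUMED_PJ_PER_BIT.items():
    print(f"Moving 1 TB {path}: {joules_to_move(1.0, pj):,.0f} J")

Whatever the exact numbers turn out to be for a given machine, the gap between those rows is the team’s revelation in miniature: the cost of computing on data is increasingly dominated by the cost of moving it.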

Today, the concept is beginning to find its way into mainstream computer design. For instance, IBM recently introduced a product called the Elastic Storage Server, which tightly packages servers and disk drives in a single appliance-like device.

The changes will come in technical computing first, but eventually data-centric computing will become pervasive. Social networking Web sites gather and move vast amounts of data. That’s a job for data-centric design. The same is true of e-commerce, and of organizational functions such as marketing, financial management and product development.

Eventually, the new approach will transform the computers on your desk and in your hand. Because of the new design paradigm, they’ll work faster and smarter. They’ll handle more data. And they’ll use less power. We’re at the front end of one of the most significant shifts in the history of computing.

———-

To learn more about the new era of computing, read Smart Machines: IBM’s Watson and the Era of Cognitive Computing.


Comments
 
July 6, 2015
9:11 am

great work


Posted by: Jerusha
 
June 15, 2015
3:16 pm

Innovation should respond to problems or gaps and this new approach will address that. Bravo from a business owner


Posted by: tabby
 
November 21, 2014
5:22 am

er … quad-core CPUs on smartphones ARE processors nearer the data, aren’t they? Except the data is currently far less than what these processors can do with it.

Also, it’s not simply where processors and data are, relative to each other. It’s where data is, relative to the user.
The cloud model pretty much forces the user to send data near the processor.


Posted by: enkidu
 
November 20, 2014
9:40 am

One of our clients happened to say this; he is a theoretical physicist with a Central University and has been using HPC systems to run complex compute jobs in Fortran. This is what he said: “The human mind does not compute but synthesises, and that is where computing will head to,” and he said that “the future computers from IBM will be synthesizing and not computing”…a remarkable point made. Let me know if someone wants to get connected; we can connect with him.


Posted by: Kalyan
 
November 18, 2014
9:58 am

It’s deceptively easy to switch words around and say “instead of moving the data to the processor, let’s move the processor to the data,” but when you think about it, it’s definitely much harder to do than to say. Data is just abstract stuff in electronic form. Processors, on the other hand, are actual, physical hardware. Given this, it makes sense to move the data to the processor–it would seem to be the easier way to do it. So one has to carefully define what one is doing and how it is supposed to help. Does moving the processor to the data mean moving the process closer to the data storage devices in the pipeline? Does it mean decentralizing the processing functions or minimizing (centralizing) the processing functions? Or are there other ways of looking at data and data processing?

This all sounds highly technical to me, but a good conceptual view that bypasses the buzz words would be invaluable for re-imagining the computing process.


Posted by: Michael Clem
 
November 17, 2014
6:33 pm

“the industry had to begin to address the costs in time, money and energy of moving large volumes of data within computing systems and networks”
Worth noting that IBM is also addressing the speed and security of data transfer via our recent Aspera acquisition. No harm in attacking the problem from both ends…


Posted by: Colin Chesterman
 
6 Trackbacks
 
October 21, 2015
12:00 pm

[…] With help from National Laboratories scientists, teams of IBMers have produced five generations of supercomputers–repeatedly ranking among the fastest machines in the world. The journey led us to where we are today: developing a sixth generation of computers, data-centric systems designed from the ground up for the era of big data and cognitive computing. […]


Posted by: THINK What it Takes to Reinvent Supercomputing–Over and Over Again
 
June 4, 2015
7:05 am

[…] Big Data is comprised of all of the unstructured data in the world–digital documents, Web sites, social media interactions, photos and videos, plus sensor data from the Internet of Things. In the Big Data era, it requires too much money and electrical energy to move all that data around within networks and computing systems, so we have to bring the data and the processing closer together. That will be done both by moving some of the processing to where the data is stored and by caching more of the data that’s being worked on closer to the processors. In a nutshell, that’s data-centric computing. […]


Posted by: IBM Helps the UK to Harness the Power of Big Data « A Smarter Planet Blog
 
April 24, 2015
5:37 am

[…] Over the last several years, segment by segment, the Big Data phenomenon has intruded on the floating point-centric value scheme and begun to force consideration of alternative measurements for assessing system value. It’s time for a new approach, which we call data centric computing. […]


Posted by: The Ever-Evolving Supercomputer in the Era of Big Data - The MSP Hub
 
April 13, 2015
4:28 am

[…] Over the last several years, segment by segment, the Big Data phenomenon has intruded on the floating point-centric value scheme and begun to force consideration of alternative measurements for assessing system value. It’s time for a new approach, which we call data centric computing. […]


Posted by: The Ever-Evolving Supercomputer in the Era of Big Data | The ISV Hub
 
April 9, 2015
10:41 am

[…] these data should impact system design. In IBM’s case we have expressly pursued a notion of Data Centric Systems that look to minimize data movement in a supercomputer to radically reduce data-movement induced […]


Posted by: The Ever-Evolving Supercomputer in the Era of Big Data « A Smarter Planet Blog
 
November 14, 2014
7:28 pm

[…] address this issue, for the past five years IBM researchers have pioneered a new “data centric” approach — an architecture that embeds compute power everywhere data resides in the system, […]


Posted by: IBM developing 100-petaflops supercomputers for national labs | Bruce's Blog
 