By Dharmendra S. Modha
Sixty years ago, in the face of tremendous skepticism, IBM engineer John Backus set out to radically change the economics of scientific computing on the IBM 704 by making programming much cheaper, faster, and more reliable. The language that he and his colleagues developed—FORTRAN—became the first widely used high-level programming language. It laid the groundwork for the software industry as we know it and the waves of transformation that computing has brought to industry, science, government, and society. The importance of FORTRAN is hard to overstate, as demonstrated by O’Reilly’s poster on “The History of Programming Languages.”
Today, we’re at another turning point in the history of information technology. The era that Backus and his contemporaries helped create, the programmable computing era, is being superseded by the era of cognitive computing. Increasingly, computers will gather huge quantities of data, reason over the data, and learn from their interactions with information and people. These new capabilities will help us penetrate complexity and make better decisions about everything from how to manage cities to how to solve confounding business problems.
But, in order for cognitive computing to take hold, scientists will have to re-architect nearly every aspect of computing—spanning silicon, systems, storage, and software. In the process, a new programming paradigm must be developed, one that plays a role analogous to the one FORTRAN played. I’m proud to say that the SyNAPSE team at IBM, with help from university scientists, has taken a first step toward inventing just such a new programming model. We call our concept corelet programming.
If you want to learn more about the era of cognitive computing, download a free chapter of Smart Machines, a book by IBM Research Director John E. Kelly III, at the Web site of Columbia University Press, http://cup.columbia.edu/static/cognitive.
We are developing this technology for the SyNAPSE project, a multi-year initiative funded in part by the U.S. Defense Advanced Research Projects Agency. The goal is to create cognitive processor chips and systems, inspired by the function, low power consumption, and compact size of the mammalian brain, which can enable a host of next-generation cognitive software applications. See examples of potential cognitive apps in the video.
The creation of FORTRAN was hugely ambitious; SyNAPSE is a hugely ambitious project in its own right. We have envisioned an entirely new computing architecture—a network of neurosynaptic processor cores, which, like the brain, is modular, parallel, distributed, fault-tolerant, event-driven, and scalable. Earlier, we demonstrated individual neurosynaptic processor cores, the key building blocks, using IBM’s silicon technology. In addition, we have simulated a network of billions of such neurosynaptic processor cores—reaching the brain’s scale of one hundred trillion synapses—on IBM’s largest supercomputer, Sequoia, at Lawrence Livermore National Lab.
Our architecture is radically different from today’s prevalent computing architecture. Trying to adapt existing programming languages to it is like trying to force a square peg into a round hole. A new approach to programming is needed.
Enter the corelet model. It’s a high-level description of a software program that is based on re-usable building blocks of code—the corelets. Each corelet represents a method for getting something done using the combination of computation (neurons), memory (synapses), and communication (axons) on individual neurosynaptic processor cores, along with inter-core connectivity. Each corelet hides, or encapsulates, all details except its external inputs and outputs.
Corelets are like LEGO blocks. Small individual corelets handle simple functions. When combined, they create new, larger corelets that aggregate functions and add new ones while hiding the underlying component corelets. In this way, the programmer can write large and complex programs using existing building blocks. With this model and its accompanying programming language, programmers will be able to produce large amounts of efficient code with relatively little effort, and people who are not programming experts will be able to create sophisticated cognitive applications. That’s much the same effect that FORTRAN had on the computing world in its early days.
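The composition idea can be illustrated in ordinary Python. This is only a conceptual sketch of corelet-style encapsulation—the `Corelet` class, the `compose` helper, and the toy edge-detector corelets below are all hypothetical names invented for illustration, not IBM’s actual Corelet Language:

```python
class Corelet:
    """A reusable building block: users see only its named external
    inputs and outputs; its internal behavior is hidden."""

    def __init__(self, name, inputs, outputs, fn):
        self.name = name
        self.inputs = list(inputs)    # externally visible input names
        self.outputs = list(outputs)  # externally visible output names
        self._fn = fn                 # internal detail, encapsulated

    def __call__(self, signals):
        # signals: dict mapping input names to values;
        # returns a dict mapping output names to values.
        return self._fn(signals)


def compose(name, first, second):
    """Combine two corelets into a larger one, LEGO-style: the outputs
    of `first` feed the matching inputs of `second`, and the component
    corelets are hidden inside the new corelet."""
    def fn(signals):
        mid = first({k: signals[k] for k in first.inputs})
        merged = {**signals, **mid}
        return second({k: merged[k] for k in second.inputs})
    return Corelet(name, first.inputs, second.outputs, fn)


# Two toy corelets: one computes differences between neighboring pixels,
# one thresholds those differences into binary "spikes".
edge = Corelet("edge", ["pixels"], ["edges"],
               lambda s: {"edges": [abs(b - a)
                                    for a, b in zip(s["pixels"], s["pixels"][1:])]})
thresh = Corelet("thresh", ["edges"], ["spikes"],
                 lambda s: {"spikes": [1 if e > 10 else 0 for e in s["edges"]]})

# The composed corelet exposes only "pixels" in and "spikes" out.
detector = compose("detector", edge, thresh)
print(detector({"pixels": [0, 5, 40, 42]}))  # → {'spikes': [0, 1, 0]}
```

The key property mirrored here is that `detector` hides its components: a user of the composed corelet sees only the external inputs and outputs, and larger corelets can be built from it in the same way.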
Along with the new architecture and the new programming model, we’ve created significant software and hardware technologies that represent a vertically integrated technology ecosystem—a new foundation for a new era of computing.
John Backus is rightly celebrated for his tremendous accomplishments at the dawn of the programmable computing era. Yet as John’s career proceeded, he came to recognize some of the limitations of the computer architecture laid out by mathematician John von Neumann in the 1940s—upon which his programming model and, indeed, all of modern computing, are based. In the lecture John delivered when he received the prestigious Turing Award in 1977, he coined the term “the von Neumann bottleneck” to describe the inefficiencies of that architecture. In 1979, he wrote, “I now regard all conventional languages (FORTRANs, ALGOLs, their successors and derivatives) as increasingly complex elaborations of the style of programming dictated by the von Neumann computer. … It is unfortunate because their long-standing familiarity will make it hard for us to understand and adopt new programming styles which one day will offer far greater intellectual and computational power.” (John died in 2007, at age 82.)
Today, a worldwide, interdisciplinary effort has begun to invent a new, non-von Neumann architecture for computing that complements today’s computers. This will be essential for the emergence of cognitive computing. My team’s architecture and corelet programming model are important contributions to this effort. Ultimately, the march of time will decide whether our inventions will have the same kind of catalytic impact on the new era of computing that the work of John Backus and his team had on the last one. But, the best way to predict the future is to create it.
Front row, left-to-right: Davis Barch, Pallab Datta, Sue Gilson, David Peyton, Kumar Appuswamy, Brian Taba, Norm Pass, Wendy Belluomini, Nitin Parekh; Middle row, left-to-right: Ben Shaw, Andrew Cassidy, Paul A. Merolla, Jeff Kusnitz, Arnon Amir, Dharmendra S. Modha, David J. Berg, Alexander Andreopoulos; Back row, left-to-right: Chris Hanson, Steven K. Esser, Emmett McQuinn, Rodrigo Alvarez-Icaza, Bill Risk, Myron Flickner, John Arthur, Bryan Jackson. For more photos of the IBM SyNAPSE team around the world, see my blog.