By Dr. John E. Kelly III
IBM Senior Vice President and Director of IBM Research
When I was a child, my father worked at General Electric’s research lab in Niskayuna, N.Y. I would visit and watch him tinker with vacuum tubes—light bulb-like devices that were used to direct electrical current in all sorts of gizmos, from radios and TVs to radar and computers. At the time, I didn’t fully understand what he was doing, but those visits inspired me to study science and, ultimately, to get degrees in physics and materials engineering.
I later came to understand that I had witnessed one of the great transitions in the history of technology. While my dad was showing me vacuum tubes, other engineers at GE’s lab were experimenting with the vacuum tube’s successor, the transistor, which ultimately ushered in modern electronics and personal computing. Those core technologies enabled computers that could be programmed to perform a wide variety of tasks.
Today, we are at the dawn of another epochal shift in the evolution of technology. At IBM Research, we call it the era of cognitive systems.
This is a big deal. The changes that are coming over the next 10 to 20 years—building on IBM’s Watson technology—will transform the way we live, work and learn, just as programmable computing has transformed the human landscape over the past 60+ years. You could even call this the post-computing era.
Notice, I don’t use the term “thinking machines.” That’s because I don’t want to suggest that cognitive systems will think like humans do. Rather, they will help us think and make better decisions.
How do we define the era of cognitive systems? It helps to compare it to what came before. The tabulating era began in the 19th century and continued until the 1940s. Those mechanical devices were used to organize data and make calculations that were useful in everything from conducting a national population census to tracking the performance of a company’s sales force. The programmable computing era emerged in the 1940s when scientists built the first electronic programmable computers. Successive generations of computing technology enabled everything from space exploration to the Internet.
Cognitive systems are fundamentally different. Traditional computers, which are still based on the blueprint that mathematician John von Neumann laid out in the 1940s, are programmed by humans to perform specific tasks. Cognitive systems are capable of learning from their interactions with data and humans—essentially continuously reprogramming themselves. Traditional computers are designed to calculate rapidly. Cognitive systems are built to analyze information and draw insights from it. Traditional computers are organized around microprocessors. Cognitive systems are organized around the data itself, drawing insights from it through analytics.
Because of these changes, the machines of the future will do much more than compute. They will be able to sense, learn and better predict the consequences of actions. In the years ahead, machines will glean insights from the vast amounts of information being gathered to help us understand how the world really works, make sense of its complexity, and provide trusted advice to humans—whether heads of state or individuals trying to manage their careers or finances. Computing intelligence will become ubiquitous and pervasive. Increasingly, computers will offer advice rather than waiting for commands.
This new era of technologies is essential to fulfilling IBM’s goal of using technology to help create a smarter planet—to make the world work better.
We need cognitive systems because recent developments in business, society and technology require new capabilities. The emergence of social networking, sensor networks and huge storehouses of business information creates a seeming overabundance of information that some call Big Data. Systems are being asked to find patterns and draw conclusions, often in near real time, from huge quantities of information—and in situations where precise answers are hard to find. At the same time, in fields ranging from retailing to healthcare to government, the individual increasingly stands at the center. People are newly empowered with information about how the world works and able to express themselves in powerful new ways. They’re also becoming increasingly decipherable via the cloud of data that surrounds them and the digital exhaust they leave behind wherever they go. Through data analytics, we can know each other much better.
In addition, we need cognitive systems because some of the fundamental building blocks of traditional computing are crumbling. For instance, consider what’s happening with silicon-based integrated circuits. Today’s microchips are the electronic brains in everything from refrigerators to spaceships. However, because of the laws of physics, traditional methods can no longer deliver the improvements in microchip performance that we are accustomed to, and need. We must invent new materials and new chip architectures.
As I mentioned before, Watson represents the beginning of this great shift. The system was built by scientists using von Neumann-based computing hardware—a cluster of 90 servers with a total of 2,880 processors. But Watson’s DeepQA software represents the paradigm shift. It uses a combination of natural language processing and machine learning to understand questions, search for answers in huge databases, and present possible answers rated by its confidence in their accuracy. Watson learns from experience.
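The pipeline described here, generating candidate answers, gathering evidence, and rating each candidate by confidence, can be caricatured in a few lines. The sketch below is a deliberately naive illustration, not IBM's DeepQA: the function names are invented, and a simple word-count score stands in for the hundreds of machine-learned scorers a real system would combine.

```python
# Toy sketch of confidence-rated question answering, loosely in the spirit
# of the DeepQA pipeline described above. Purely illustrative: the scoring
# rule and all names are assumptions, not IBM's implementation.

def score(candidate, passages):
    """Score a candidate answer by how often it appears in the evidence
    passages (a crude stand-in for real machine-learned scorers)."""
    return sum(p.lower().count(candidate.lower()) for p in passages)

def rank_answers(candidates, passages):
    """Return (candidate, confidence) pairs, best first, with the raw
    evidence scores normalized so the confidences sum to one."""
    raw = {c: score(c, passages) for c in candidates}
    total = sum(raw.values()) or 1  # avoid division by zero
    ranked = sorted(raw.items(), key=lambda kv: kv[1], reverse=True)
    return [(c, s / total) for c, s in ranked]

# Hypothetical evidence retrieved for "What is the largest planet?"
evidence = [
    "Jupiter is the largest planet in our solar system",
    "The largest planet is Jupiter, not Saturn",
]
print(rank_answers(["Jupiter", "Saturn"], evidence))
```

Here "Jupiter" outranks "Saturn" because it occurs more often in the evidence; a real system would weigh many independent signals and learn their relative importance from training data, which is what lets it improve with experience.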
Created as a so-called grand challenge by IBM Research, Watson caused a stir last year when it defeated two past grand champions on the TV quiz show Jeopardy! Now, IBM scientists are working with experts in healthcare, financial services, government and other domains to create versions of Watson tailored to their needs.
IBM embarks on grand challenges like Watson in efforts to reach ambitious goals with five- to 10-year time horizons. The challenges require significant breakthroughs and, if successful, would represent a revolution in the way we work, create and live our lives.
IBM has long been at the forefront of major advances in technology. IBM was a leader in tabulating machines when four predecessor companies combined in 1911. In 1944, IBM engineers worked with scientists at Harvard University to build an early electromechanical computer, the Harvard Mark I, the first machine that could execute long computations automatically. It was the dawn of the computing era. Since then, IBM has been at the forefront of one important computing advance after another.
Just because we’re moving to a new era doesn’t mean the old one is going away. Far from it. Programmable systems will be around for years to come. But the center of innovation is beginning to move to cognitive systems.
We’re on the leading edge of a technology shift that promises to utterly transform business and society once again. But even though IBM has a broad portfolio of technologies and expertise, no single company can navigate a transformation of this magnitude alone. We look to our clients, university researchers, students, government policy makers, industry partners and entrepreneurs to take this journey with us. Welcome, all, to the era of cognitive systems.