By John E. Kelly III
It’s amazing for me to recall that in 1980, when I came to IBM Research out of graduate school, engineers were striving to design chips containing 100,000 transistors, the tiny electronic switches that process and store data. Today, it’s common to put five or six billion transistors on a sliver of silicon.
This remarkable achievement is the fulfillment of a prediction made in 1965 by industry pioneer Gordon Moore: that the number of components on a chip would double every year for the foreseeable future. He later amended the period to 24 months. His prediction, codified as Moore’s Law, has come to symbolize the seemingly inevitable march of technological progress: the ability to make all sorts of electronic devices faster, smaller and more energy efficient.
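The cadence Moore described can be checked roughly against the figures above. Here is a back-of-the-envelope sketch in Python; the transistor counts and the 50-year span are from this article, while the arithmetic and the 2015 end date are my own assumptions:

```python
import math

# Illustrative arithmetic only: the two transistor counts come from the
# article; the 2015 date is assumed from its 50th-anniversary framing.
transistors_1980 = 100_000          # chips being designed circa 1980
transistors_today = 5_000_000_000   # a modern chip, per the article
years_elapsed = 2015 - 1980

doublings = math.log2(transistors_today / transistors_1980)
months_per_doubling = years_elapsed * 12 / doublings

print(f"{doublings:.1f} doublings")          # about 15.6
print(f"~{months_per_doubling:.0f} months per doubling")  # about 27
```

The result, roughly a doubling every 27 months over 35 years, lands close to Moore’s amended 24-month cadence.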
While Gordon’s prediction proved to be more prescient than he could have imagined, today, 50 years later, the chip industry is no longer able to clear the high bar he set, due largely to limits imposed by the laws of physics. To put things bluntly: Moore’s Law is hitting a wall, and that collision holds significant consequences for business and society. Unless scientists and engineers come up with bold new approaches to chip architectures and materials, technological progress will slow.
To accelerate progress, we need to invent the next switch.
Today’s switch is the transistor; a precursor of the modern device was invented by a trio of scientists at AT&T Bell Labs in 1947. IBM’s goal is to lead the search for the next switch by achieving critical scientific and technological breakthroughs and incorporating those advances into commercial computers.
The transistor was one of the most significant inventions of modern times. It gave rise to wave after wave of innovation in computing, from the corporate computers of the 1950s to today’s smartphones and social networks.
Today, new capabilities are emerging that promise to make computing even more essential to individuals, businesses and society as a whole. Big data analytics, cognitive computing and other advances are enabling better health care, more efficient energy systems, autonomous vehicles, better weather predictions, and more effective management of everything from supply chains to financial markets and from cities to the global economy.
Yet, unless we accelerate progress in our search for the next switch, much of that potential will remain untapped.
The good news is that scientists at IBM and around the world are exploring a wide variety of concepts that could extend the life of today’s technologies, and, eventually, have the potential to replace them on the cutting edge of computing.
New Pathways for Technology
The workhorse of the digital economy is the integrated circuit: a chip, made primarily of silicon, that stores and processes data in the form of ones and zeros using tiny on-off switches, the transistors.
One of the most promising avenues of inquiry for extending today’s conventional chip technology is the field of compound semiconductors. Scientists are exploring the use of indium gallium arsenide in combination with silicon, with the goal of increasing chip performance while decreasing energy consumption.
Further out in the future, carbon nanotubes have the potential to greatly improve the performance of transistors. In switches built from this atomic-scale material, the devices could be smaller, and electrons could move through them 10 times faster than through conventional semiconductor materials.
Chips based on carbon nanotubes might be the last hurrah of the traditional transistor switch, however. To achieve major improvements in data processing we will have to change the way we think about computing and invent new architectures for chips.
Two of the most exciting potential solutions are quantum computing and neuromorphic chips. Quantum computing has the potential to harness the principles of quantum physics to produce computers that could solve certain types of problems orders of magnitude faster than conventional computers, in areas such as computer security, database search, and materials discovery. Neuromorphic chips could enable computers to mimic the human brain, using vast networks of tiny interconnected devices that respond to changes in the flow of electrons rather than simply turning on and off like transistors. They could be used to draw insights from images, sounds and other sensory information.
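To make the quantum idea slightly more concrete, here is a toy single-qubit illustration in Python with NumPy. This is my own sketch of the textbook principle, not drawn from the article or any IBM system: a qubit can occupy a superposition of 0 and 1, and a register of n qubits spans 2^n amplitudes at once, which is where the potential speedups come from.

```python
import numpy as np

# A classical bit is 0 or 1; a qubit's state is a vector of two amplitudes.
zero = np.array([1.0, 0.0])  # the |0> state

# The Hadamard gate puts the qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
superposed = H @ zero

# Measurement probabilities are the squared amplitudes: a 50/50 split.
probs = superposed ** 2
print(probs)  # [0.5 0.5]
```

With n such qubits, the state vector has 2^n entries, and a quantum algorithm manipulates all of them in a single operation, the property that algorithms for search and factoring exploit.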
Much of the progress in science and technology comes in predictable increments. We improve on what we’re already doing. Moore’s Law is a measure of that kind of innovation. But there are times when incremental innovation isn’t sufficient; when society requires great leaps forward. This is one of those times.
When I landed at IBM years ago, the silicon semiconductor revolution was still in its infancy. Now, we’re at another turning point, and, having witnessed and participated in the explosion of digital technology, I can foresee even bigger and more rapid changes ahead. Computers will become our partners, augmenting our intelligence, helping us to be more successful in our private and professional lives, and driving society forward. So let’s get on with inventing the next switch. Let’s take that big leap and change the world again.
To learn more about the future of computing, read Smart Machines: IBM’s Watson and the Era of Cognitive Computing.