The world is in the early stages of a major shift—from the programmable computing era to the era of cognitive systems. Today at IBM Research, we’re convening our second-annual Cognitive Systems Colloquium. We’ll be hearing from some of the smartest people in the tech industry. Please return throughout the day for frequent updates. And join the discussion at #CognitiveComputing.
9:10 Zach Lemnios, vice president, research strategy and worldwide operations:
We’re here to bring together researchers, clients, students, and young entrepreneurs. We want to highlight the work of the past year, look at the challenges before us, and help build an ecosystem to drive innovations in cognitive computing. How do we scale up this enterprise? How do we make these systems very easy for people to use?
9:20 John Kelly, senior vice president, IBM Research: Augmenting Intelligence in the New Era of Computing.
It has been just one year since IBM Research staged its first Cognitive Systems Colloquium, yet a lot has happened. IBM launched its Watson Group to commercialize cognitive technologies. We put $1 billion behind it.
IBM Research executed its first major reorganization in nearly 20 years—allocating about one third of its professional staff to cognitive science and big data analytics. That’s about 800 to 1,000 people.
We put our brains and our money where our mouth was.
Also Research committed to invest $3 billion to advance silicon and post-silicon chip development. And we took our cognitive technologies to the cloud, so these new capabilities can be put to use anywhere in the world—such as the countries of Africa.
We also expanded our partnerships with academics. Our ecosystem is really taking shape. We have dozens of universities and several hundred development partners. It’s happening globally.
IBM is determined to lead the way into a new era of computing—but what we’re trying to do here is to build a community, an ecosystem supporting innovation.
The past 12 months brought something else into sharp focus. For decades, computer scientists have sought to advance the science of artificial intelligence, which is defined by many people as the effort to develop computers to match the intelligence of the human brain.
At IBM we have a different point of view. We talk about augmenting intelligence rather than artificial intelligence. It’s a distinction that really matters.
Throughout history, humans have created tools to overcome our limitations and augment our capabilities. Cognitive systems are the next frontier in augmenting human intelligence. In a world of big data that is growing beyond our ability to manage optimally, cognitive systems will enable humans and machines to collaborate to accomplish things that neither people nor computers could do as well on their own.
Today, we’ll do a lot of talking about how to do this—how to augment our capabilities. We’re trying to address many more of the right-brain activities, versus just computation. This is not trivial. We need to move from English to multiple languages. We need to move from one-line answers to full dialogue. We need to create systems that can understand images. We have great challenges ahead of us, but when we get there we’ll have systems with incredible capability.
Here’s Guru Banavar, VP of Cognitive Computing in IBM Research, talking about how we think about A.I.
To learn more about the new era of computing, read John’s book, Smart Machines: IBM’s Watson and the Era of Cognitive Systems.
10:20 Jeff Hawkins, CEO, Numenta: What the Brain Can Tell Us About the Future of Computing.
I use the term machine intelligence to describe what we’re working on.
I’ll start with an analogy. Back in the 1940s, people were building dedicated machines for certain problems, and others said we should build more general computing machines.
In the 1950s, these things settled out. We settled on universal computers, which are flexible but aren’t necessarily the best for every use. They were scalable. It led to the computer revolution we have seen.
Now we’re in another period of messiness. We’re trying to settle on a paradigm.
I believe we’ll settle on systems that are based on the brain’s neocortex. These are learning systems. They’re the most flexible solutions. They can scale. We know this because our own brain is this way.
Numenta has two goals. 1) Discover the operating principles of the neocortex. 2) Create technologies based on those principles. We’re not trying to build a brain or anything like a human.
The cortex knows nothing when you’re born. It has to learn. It learns by studying patterns that come from the senses. It builds a model of the world based on this evidence. It understands, and then it generates behavior—like how I’m speaking now.
It’s a sensory-motor model of the world.
The neocortex is a sheet of cells, in a human about the size of a dinner napkin. (He unfolds a paper napkin) This is us.
The regions of the cortex are organized as a hierarchy. Four layers of cells. The neurons are organized in mini columns.
Most of the learning occurs through the formation of new synapses.
This is the system we want to understand. Can we understand in detail how the neurons and synapses work?
Our theory is hierarchical temporal memory. It’s how we recognize patterns.
We think each layer in the cortex implements a different type of temporal memory. They include attention, motor inference, and high-order inference.
We’re working on all the layers, and making progress. Once we figure this out we can go and build brains.
We’ve created some products based on what we have learned so far. One is Grok, which is based on HTM. It’s an application for spotting problems in servers in data centers. We spot and predict anomalies.
We can also spot anomalies in financial trading.
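The idea can be sketched in a few lines. This is not Numenta's HTM algorithm; it is a minimal stand-in that scores each new metric value against a rolling baseline, just to make the "spot anomalies on a stream" idea concrete:

```python
import math
from collections import deque

class StreamingAnomalyDetector:
    """Toy stand-in for HTM-style anomaly scoring: compares each new
    value to a rolling mean/std baseline. Not Numenta's algorithm."""

    def __init__(self, window=50, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def score(self, value):
        if len(self.window) < 10:        # warm-up: learn a baseline first
            self.window.append(value)
            return 0.0
        mean = sum(self.window) / len(self.window)
        var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
        std = math.sqrt(var) or 1e-9     # guard against a flat baseline
        z = abs(value - mean) / std
        self.window.append(value)
        return min(z / self.threshold, 1.0)   # 1.0 = maximally anomalous

# Steady server metric hovering near 50, then a sudden spike.
det = StreamingAnomalyDetector()
for t in range(100):
    det.score(50 + t % 5)
spike_score = det.score(120)   # scores near 1.0
```

The same detector works on server metrics or trading data; only the input stream changes.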
We’re now trying to add social media data to it. We can combine the streams of data and the analysis and cross-reference them, to discover relationships, causes and effects.
A bunch of companies are developing tools based on our theories and technologies. One is cortical.io. They do natural language processing. We’re getting close to how language is really processed in the brain.
You can run all of these applications on the core HTM code, the algorithm. You just change the data you apply the system to. You don’t have to change the algorithm.
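That portability comes from the data format: HTM's algorithms consume sparse distributed representations (SDRs), and cortical.io encodes words the same way, as "semantic fingerprints." A toy illustration with hand-made fingerprints (real ones come from trained models; the bit positions below are invented):

```python
# Toy SDRs: sets of active bit positions in a large, mostly-zero bit space.
# Semantically related terms share many active bits; unrelated terms share few.
coffee   = {12, 87, 301, 442, 913, 1555, 2048, 3001}
espresso = {12, 87, 301, 913, 1555, 4100, 5200, 6001}
keyboard = {55, 140, 777, 1999, 2500, 3600, 4800, 9000}

def overlap(a, b):
    """Similarity between two SDRs is simply the count of shared active bits."""
    return len(a & b)

assert overlap(coffee, espresso) > overlap(coffee, keyboard)
```

Because similarity is just bit overlap, the same comparison machinery works whether the SDRs encode words, sensor patterns, or server metrics.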
We’re working on image classification the way the brain does it.
We’re working on goal-oriented behavior—robotics and smart bots. You’re moving through the data and focusing on the end game.
All of our algorithms are documented. We believe in research transparency. We even post our daily research code, so you can look at all the messy stuff we do.
We collaborate with IBM, with DARPA, with little companies. Anything that works, we’re open for it. We just want to make this happen.
I’ll end with a story. 21 years ago I gave a talk at Intel. It was their management meeting. I talked about the future of personal computing—it would all be about mobile devices. I suggested that Intel could capitalize on it.
Afterwards, I sat with Gordon Moore and other senior execs. They didn’t believe a word I said. The conversation got awkward. They asked what the applications would be. I knew simple things, like calendar and address book, but I didn’t know what all the apps would be. Three years after that we introduced the Palm Pilot, and three years later the Treo, the first smart phone.
Today, we’re at the same turning point. We’re switching to another paradigm. It’s about machines that learn. They’re based on the principles of the neocortex.
People ask me what the big applications will be. I don’t know yet. It’s like the calendar and the address book. We can’t fully imagine yet what will come.
But I’m sure of this: Twenty years from now machine intelligence will be driving this industry in so many ways.
Q: Airplanes don’t look or act like birds. They don’t have flapping wings. Why do you think computers will operate like brains?
Jeff: The Wright brothers studied the principles of flight. The same thing applies here. We’re looking at the principles of how the brain works.
If we want to build a cognitive system, the only example we have is the brain. Why would we look anywhere else?
And when you look at the brain you find principles, common architectures that spread across different modalities.
At some point we can throw away the brain. We’ll know enough. We’ll just do our own thing. Until then, let’s learn from the brain.
Hawkins, a serial inventor and serial entrepreneur, was an early pioneer of pen-based computing and founder of Palm Computing and Handspring. After taking a keen interest in how the brain works he published a book, On Intelligence, where he laid out his theory of human intelligence.
He co-founded Numenta based on the idea that the brain is the best example of an intelligent system and provides a roadmap for building intelligent machines. Numenta seeks to discover the principles underlying the brain’s neocortex and to use those principles to create learning algorithms.
Here’s Jeff describing what Numenta does.
11:45 Manuela Veloso, professor, Carnegie Mellon University: Symbiotic Autonomous Robotics.
I’m on a quest for autonomous intelligent robots—which can sense, perceive, understand and act autonomously. We call them CoBots, collaborative robots.
We’re putting A.I. and robotics together.
I’m also looking at how these robots co-exist with humans.
We’re developing robots that perform tasks for humans—tasks such as going places, delivering messages, transporting objects, escorting people, and helping us conduct video conferences.
We created soccer robots. They were a team. I always think about multiple robots. We also optimize the use of multiple robots. We schedule tasks.
There are basic skills that are difficult for them, such as stopping at the right place. The science behind it is beautiful.
They’re equipped with depth and distance sensors—laser range finders and the Kinect technology. I credit my student Joydeep Biswas. He works on the autonomy, which is a combination of localization and navigation.
When I came to IBM I said to myself, where are the robots? There are a lot of cell phones. They’re great. But they don’t move.
Robots are a third species. When will we have humans, animals and robots working together?
The robot has a map of the space it operates in. It has a history of observations and actions. And it can estimate what will happen if it acts in a certain way.
When the robots are moving, they’re constantly calculating. Where am I? They have to operate in real time. There’s an enormous amount of data from the sensors, and they’re moving constantly and recalculating.
We use algorithms to manage this.
First the robot samples the images. It defines a plane. It samples more. It repeats many times. It creates a 3-D image. It matches the image to the map.
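That sample-and-match loop is in the spirit of Monte Carlo localization. A minimal 1-D sketch (the CoBots' actual algorithm, which extracts planes from depth data, is considerably more sophisticated):

```python
import math
import random

def particle_filter_step(particles, motion, measurement, world_map, noise=0.1):
    """One predict/weight/resample cycle of 1-D Monte Carlo localization.
    world_map(x) gives the range reading expected at position x."""
    # Predict: move every particle, adding motion noise.
    moved = [x + motion + random.gauss(0, noise) for x in particles]
    # Weight: particles whose predicted reading matches the sensor get heavier.
    weights = [math.exp(-(world_map(x) - measurement) ** 2 / 0.5) for x in moved]
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]
    # Resample: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

# Toy world: a wall at x = 10; the range sensor reads the distance to it.
wall = lambda x: 10.0 - x
particles = [random.uniform(0, 10) for _ in range(500)]
for _ in range(30):
    particles = particle_filter_step(particles, motion=0.0,
                                     measurement=7.0, world_map=wall)
estimate = sum(particles) / len(particles)   # converges near x = 3
```

Repeating the cycle as the robot moves keeps the position estimate current in real time, which is exactly the constant recalculation described above.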
These service robots are data collectors. They’re much better at this than anybody in the world.
They generate very active maps. They can create maps that measure humidity, temperature, wi-fi signals and other conditions.
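Such a map can be as simple as a grid that averages readings per cell as the robot drives. A minimal sketch (the cell size and sample readings are invented):

```python
from collections import defaultdict

class SensorMap:
    """Grid map that averages readings (wi-fi strength, temperature,
    humidity) collected as a robot drives around. Coordinates are
    binned into square cells."""

    def __init__(self, cell_size=1.0):
        self.cell_size = cell_size
        self.sums = defaultdict(float)
        self.counts = defaultdict(int)

    def _cell(self, x, y):
        return (int(x // self.cell_size), int(y // self.cell_size))

    def add(self, x, y, reading):
        cell = self._cell(x, y)
        self.sums[cell] += reading
        self.counts[cell] += 1

    def value(self, x, y):
        cell = self._cell(x, y)
        return self.sums[cell] / self.counts[cell] if self.counts[cell] else None

m = SensorMap()
m.add(2.3, 4.1, -60)   # dBm wi-fi reading
m.add(2.7, 4.9, -70)
avg = m.value(2.5, 4.5)   # both readings fall in cell (2, 4)
```

One map structure, many overlays: swap the reading source and the same grid records humidity, temperature, or signal strength.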
Our robots have traveled 890 kilometers so far. The goal is 1000 kilometers. They are extremely robust.
I have been at CMU for many years, and we’ve had many robots. But in the past they were always chaperoned. These are autonomous. Recently I had one robot follow another so it could video it.
When you come to visit me, you’ll be escorted to my office by a CoBot.
No matter how much we try to have these machines exist with humans, they have many limitations. They miss things. They fail to understand things. These things keep me up at night.
They don’t have arms. I didn’t buy them arms. They can’t push elevator buttons, so they can’t go everywhere on their own. They can’t open doors.
One day I decided that it was okay for a robot to have limitations. It was a breakthrough. We changed the paradigm. Why make robots that can do everything humans can do? We started the principle of symbiotic autonomy. When they need help, they’ll ask for it.
We humans aren’t self-sufficient either.
I send them. They’re autonomous. When they need help, they ask for help. They learn to ask for help.
After that, when we planned the processes and capabilities, we included the asking-for-help option.
The robot waits by an elevator and asks, “Can you help me and press the elevator button?” Then the robot goes in. And it asks the person to press the button for the floor it wants to go to.
We are creating intelligent machines that can’t do it all for themselves, but they know when they need help.
Sometimes, nobody helps. Nobody comes along. Then the robot sends out an email. “I have been waiting for 5 minutes. I need help. Come and rescue me.”
They now navigate not just based on the shortest distance, but on where they will be able to get help if they run into trouble.
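Help-aware navigation can be cast as an ordinary shortest-path search whose edge costs blend distance with the expected wait for help. A sketch with a hypothetical floor plan and made-up probabilities:

```python
import heapq

def plan(graph, start, goal, help_prob, wait_penalty=20.0):
    """Dijkstra over edge cost = distance + expected cost of waiting for help.
    help_prob[node]: chance a human is nearby to press a button or open a door;
    corridors where help is unlikely are penalized by the expected wait."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, dist in graph.get(node, []):
            extra = (1.0 - help_prob.get(nxt, 0.0)) * wait_penalty
            heapq.heappush(frontier, (cost + dist + extra, nxt, path + [nxt]))
    return None, float("inf")

# Hypothetical floor plan: the short route passes a rarely staffed elevator.
graph = {
    "office": [("back_elevator", 5), ("lobby", 8)],
    "back_elevator": [("lab", 5)],
    "lobby": [("main_elevator", 2)],
    "main_elevator": [("lab", 5)],
}
help_prob = {"back_elevator": 0.1, "lobby": 0.9, "main_elevator": 0.9}
path, cost = plan(graph, "office", "lab", help_prob)
# The planner takes the longer route through the busy lobby, where help is likely.
```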
This is what autonomy is about. It’s not about building machines that can do it all. They’re capable of doing some things and of recognizing when they can’t do something, and asking for help.
This is how they’ll be able to travel 1000 kilometers.
The last part is we can talk to robots, and they’ll learn from it. They need to be able to understand this language nightmare.
We give them instructions by speaking to them. We have a conversation with them where they ask us questions so they understand what we want them to do.
They update their knowledge base through this process.
They can search the Web when they run into a limit to their knowledge. If you ask one to get you a cup of coffee, it can find out what coffee is, understand that coffee is often in the kitchen, and then head off for the kitchen to fetch it.
In the kitchen they can ask somebody to put coffee in their basket.
After that they know that when somebody asks for coffee, they know where to get it.
I can’t have the robot grow an arm, but I can have it learn. Its knowledge can grow.
Imagine a day when robots help around the house. Why aren’t we selling robots that help you clean your house or cook a meal? We’re not there because we don’t accept the fact that robots won’t be able to do it all.
A robot company could have a call center, and the robot could call to ask for help in accomplishing tasks.
The robot could call and ask for help in turning on the dishwasher remotely.
The task will get done. We won’t even know how they do it.
They can do some things. They rely on other sources when they can’t.
And they move…. which is beautiful.
Q: Will robots ever be able to do everything humans can do?
Manuela: I can speak 8 languages. I can scramble eggs. I can prepare and give a lecture. I can do it all. We don’t even know how we do it. Robots won’t be able to do it all.
This National Science Foundation video explains Manuela’s CoBot project. She and her students create Collaborative Robots that navigate through buildings autonomously and learn from their experiences. There may be roles for them in offices, museums, hospitals and schools.
2:10 John Underkoffler, CEO of Oblong Technologies: The Future of the User Interface.
First assertion: One of the highest-leverage activities we can undertake as computer scientists is building user interfaces. They were largely ignored in the early eras of computing.
In the era of cognitive computing, we won’t be able to move forward fast enough if we’re still mousing around and using the keyboard.
Question: What UI does your business use?
It doesn’t mean anything because we have been stuck with the Xerox PARC/Apple UI for more than 30 years.
We need a new UI.
Think of computing in terms of here and there.
In traditional personal computing, everything you did was right in front of you, at your fingertips. The data and the UI were “here.”
With today’s mobile devices, the interface is here but the data is there, up in the cloud or somewhere else.
We have to bring the data and the UI together again but in new ways.
What we’re talking about is spatial computing.
We’re talking about using large spaces, large display screens, and the ability to command and interact using gestures—a new gesture language.
Our early spatial technologies were used in the movie Minority Report. The characters used data-driven UIs to harvest the dreams of children to predict crimes before they happened.
The 50 million viewers of the movie were essentially a mass focus group for us.
At Oblong, we created a spatial operating environment, a next-generation OS called g-speak.
Five assertions that underlie Oblong’s value proposition:
Assertion 1: We care about space. We can use spaces and hand and body gestures to interact with data. We have to teach machines about space.
Assertion 2: We need to build UIs that activate the parts of the brain that create the feeling of exhilaration.
Assertion 3: It’s not just about space and gesture. We need to incorporate time as well. Ultimately we have to create a spatio-temporal awareness in computing systems.
We’ll have multiple modalities for interacting with computers.
(He shows time-oriented financial data presented graphically in a hexagonal space.)
Assertion 4: We’ll have to harness more of the power of human language to describe new things more precisely. What you write as a programmer should be what you mean, at a high level.
Assertion 5: We’re moving into a world of plurality. Programs will run on many screens and in many situations, on many different kinds of computers. Programs have to be written so they can adapt to different situations.
Ultimately you get a shared pixel workspace—with many systems interacting, many people involved, and many ways of humans interacting with the workspace.
You can bring whatever devices you have into the room and plug them into the system.
One person’s tool becomes available to everyone in the room.
Environments like this will become the necessary front end for cognitive computing. You need environments that are this rich and enable multiple users.
It’s a new sort of human architecture.
We want to drive forward a future that captures human intent, and does so on human terms, rather than trying to abstract your desires and project them into digital machines.
More than anything, these are human systems.
3:00 Bob Kahn, CEO of the Corporation for National Research Initiatives, a non-profit organization that promotes research to keep the US competitive. Along with Vint Cerf, he was one of the architects of the Internet.
The technology wasn’t the most important thing about the Internet. The essential ingredient was all the social structures that allowed it to grow and evolve. I’m talking about the governance bodies, such as the Internet Society, ICANN, the ICCB, and others. The idea was to empower everybody out there to do the management, rather than trying to manage everything centrally.
When we started working on the Internet, most of the government folks really didn’t know or care, so we didn’t have to ask anybody for permission.
Later, government people woke up and asked “Who’s in charge?”
My answer is that nobody is in charge of a lot of important things, such as the world economy.
We started with a real problem. We had a lot of computers and networks, but they didn’t interoperate.
The internet didn’t happen all at once. There was a gradual shift from the ARPANET to the Internet.
DARPA started it, but it wouldn’t have happened if NSF didn’t pick it up and run with it. This was in the 1980s. They got other parts of the US government to get on board, and help fund it.
Also, serendipity played a major role. We didn’t know when we started that the PC would come along, that AT&T would be broken up and that the US would deregulate the telecommunications industry.
In 1993, we opened it up to commercialization and put MCI mail on the Internet. That was the first commercial application.
What’s wrong with today’s network interfaces? They’re not smart enough; cognitive abilities are needed. Very large transfers are still problematic. Security is mainly absent.
The answer is a digital object architecture. It will support interoperability based on identifiers. It will also improve security.
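The core idea of a digital object architecture is that a persistent identifier resolves to typed records about an object, so the object's location can change while references to it do not. A toy resolver in that spirit (not the real Handle System protocol; the handle and records below are invented):

```python
# Toy handle registry: identifier -> typed records, in the spirit of the
# Handle System (RFC 3650-3652). Not the actual protocol or API.
registry = {
    "20.5000.101/abc123": {
        "URL": "https://archive.example.org/objects/abc123",
        "CHECKSUM": "sha256:9f2c0ab1",
        "OWNER": "research-lab-42",
    }
}

def resolve(handle, record_type="URL"):
    """Look up a persistent identifier. The stored location can change
    over time without the identifier itself ever changing."""
    entry = registry.get(handle)
    return entry.get(record_type) if entry else None

location = resolve("20.5000.101/abc123")
owner = resolve("20.5000.101/abc123", "OWNER")
```

Because consumers hold only the identifier, repository migrations amount to updating the registry record, never rewriting the references.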
There are major challenges for cognitive systems, as well.
We need to understand what a cognitive system is and what it contains.
Cognitive systems need to understand their external environment.
They need to learn from experiences and to be taught.
They need to understand anything unusual that’s going on, and need to be able to report about it.
They need to distinguish between short-term requirements and long-term requirements.
We have to figure out how to build systems that are less brittle, that are not so constrained.
Just like the Internet, it has to grow organically. You need basic rules of the road and then you need to set it free.
You have to build a community and an ecosystem. It won’t happen overnight. You have to have a plan and the parties that are participating will have to stick with it.
It can be done using today’s Internet, but it has to be able to adapt in the future.
You have to be able to show people what cognitive computing will do for them, so they’re in favor of it.
You need a vision of where you want to go.
You put all these things together and you can succeed at it.
Bio: While a leader at the U.S. Defense Advanced Research Projects Agency (DARPA) in the 1970s, he launched the national networking technology program that later gave birth to the Internet. He was also the co-inventor of the TCP/IP protocols that are foundational to the Net. For his life’s work, he has received many of the most important awards and recognitions in the computing domain, including the Turing Award and the U.S. National Medal of Technology. Here’s his full biography.
Here’s Bob talking about the past, present and future of the Internet.
4:45 Santiago Quesada, director of exploration and production technology at Repsol SA.
Humans have explored for oil for 150 years. And while we’ve learned a lot about how to do this, the data we use today to determine where to drill is still full of noise. Engineers and geologists interpret the data with models, but the truth is that many times they don’t know precisely where the oil is.
Repsol’s cognitive computing initiative with IBM, which began last May, aims to apply cognitive systems to the industry, because the process has to change. Put yourself on a rig in the middle of the ocean and ask: how do you drill a well that will cost $400 million and go at least 3 kilometers under water and through 3 kilometers of rock? The statistics say success is less than 25%. It’s a risky business.
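A back-of-the-envelope calculation with the figures from the talk shows why the risk matters:

```python
# Figures from the talk: one deepwater well costs ~$400 million and
# succeeds less than 25% of the time.
well_cost = 400e6
p_success = 0.25

expected_wells = 1 / p_success              # wells drilled per success, on average
expected_cost = well_cost * expected_wells  # expected spend per producing well
# At a 25% hit rate, each producing well effectively costs $1.6 billion,
# so even a modest improvement in hit rate is worth hundreds of millions.
```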
We also need to find tomorrow’s oil differently, because we have a responsibility. If you look at any analysis predicting energy demand over the next few decades, oil and gas companies still have the responsibility of providing more than 50 percent of the world’s energy supply. Cognitive computing will help us do this more efficiently.
We’re transitioning from a traditional oil and gas company to an energy company. We’re working with any technology that can contribute to satisfying the energy needs of the world.
We work in smart fields, materials and corrosion, nanotechnology, subsurface illumination, earth modeling and enhanced oil recovery.
But we have to go farther. We have to look into emerging technologies. We’re talking about big technology, new computing systems—including cognitive systems.
For a non-expert like me the real beauty of the cognitive system is to enhance the way our technical teams, and management, can make decisions.
…our financial investment decisions, our operational decisions.
Our vision of the Repsol/IBM collaboration:
We have created a joint team, including our experts in oil and gas exploration and production. We’re prototyping tools that we’ll use in specific decision making situations.
We’re combining computers and humans to overcome the challenges that are difficult for people to handle on their own.
One of our prototypes addresses how we make decisions concerning asset acquisitions; the other focuses on optimizing production.
We know this will be difficult.
But it must be done. We have to deal with a huge amount of data. It’s impossible for humans to handle on their own.
We want to not only handle the data but to control this data. Today, we don’t control data. Data controls us. We have to change this.
With IBM, we’re working on user studies—psychology, the way we design our workflows. We want to create and evaluate hypotheses.
I don’t have a definition for cognitive systems. But I see it as an ecosystem, where the power of the humans is enhanced and magnified by computers.
We want to make it a reality, not just a vision. We’re combining talents of the joint team. We have the passion for this. Hopefully we’ll see important results very soon.
Let’s invent the future together.
Here’s an article that describes in more detail the alliance between IBM and Repsol.
There’s a big gap between technology and what patients need at their bedsides. We have to change that.
The biggest discovery in biology was the cracking of the genetic code—sequencing genes. It cost $3 billion to sequence the first person’s genome.
But since then we have brought a tremendous amount of computing power to bear on this problem, and we have reduced the price dramatically.
We have to address needle-in-the-haystack issues.
We want to be able, within one day, to find the single nucleotide at fault in a child. Modern genomics has to be more like physics.
The origins of disease are much more complex than the problems facing physicists.
We need to change our paradigm in genomics.
We have to put basic science together with clinical actionable change.
In the Genome Center, we put about $40 million into sequencing machines, so we can sequence about 18,000 individual genomes per year for about $1,000 per person.
This is a genuine revolution in science.
We need scalable solutions for personalized medicine—to really understand the cause of an individual’s disease. That’s why we’re working with IBM. We want to make a scalable solution.
Genomics is incredibly complex. We just don’t understand very much biology.
We’re living in a world of sparse distributed data and we need help—from cognitive computing.
We do the sequencing of tumors. Then we bring the power of IBM Watson to analyze the data.
The New York Genome Center and IBM are collaborating to use IBM Watson technology in combination with advanced gene sequencing techniques to pinpoint the most effective treatments for cancer patients. Just yesterday, IBM announced that Cleveland Clinic will use the IBM Watson Genomics Analytics solution that has resulted from the collaboration.
Here’s an article about the collaboration between IBM and the New York Genome Center.