One of the most intriguing elements of the new era of cognitive computing is the development of brain-inspired technologies. These are technologies that mimic the functioning of the neurons, axons and synapses in the mammalian brain with the goal of interpreting the physical world and processing sensory data: sight, sound, touch and smell. Today’s IBM Research Cognitive Systems Colloquium at IBM Research – Almaden is focusing on this realm of the cognitive computing world. Please come back for frequent reports and updates, and join the conversation at #cognitivecomputing.
9:15 Jeff Welser, director of IBM Research – Almaden
Cognitive computing addresses questions where there is no simple yes or no answer but many shades of gray. The sheer amount of data makes this shift necessary.
We’re advancing the Watson group and the algorithmic work, but also looking at analog information. This is where the SyNAPSE project comes in.
We aren’t talking about replacing human effort and human thought, but about a partnership between humans and computers.
Watson is the left-brain, Von Neumann stuff. The stuff we’re talking about today is the right-brain stuff—addressing the senses. We want to understand what you see in this area. This is just the beginning.
9:30 Karlheinz Meier, University of Heidelberg and co-director of the EC’s Human Brain Project
Neuromorphic Computing Comes of Age
I point back to the early days of computing, with John Von Neumann. We’re at another turning point today with neuromorphic computing. Like them, we use the devices that are available. They used vacuum tubes. We use CMOS chip architectures when we work on neuromorphic technology.
It’s important to try different approaches. It’s too early to narrow down to one path. In the Human Brain Project we have two complementary concepts: the many-core processor system and the physical model system.
I want to focus on the physical model approach. We have a series of rationales: we use mixed-signal architectures; we’re driven by architecture, not devices; we use high neuron input count; and we focus on configurability and scalability. The objective is rapid circuit exploration. Our chips look like other chips, but they’re laid out in a pattern that looks like a tennis court. The synapses are in the middle and the neurons are around the sides, like the boundaries of the playing area. Today we have single-chip systems with 200,000 neurons. So we have built chips. Now we are building the systems. We have about 20 of them under construction in Heidelberg. Our energy use is 14 orders of magnitude better than conventional supercomputers for the same job. But still we’re 10,000 times less efficient than biology.
We can compute 12 orders of magnitude better than today’s supercomputers. This is important because when you’re doing experiments, you want to be able to run many of them—so run them fast. We’re working on making the systems useful for non-experts.
We have created a language, tools and editors, and we have remote access via a portal. There are many applications. One is reverse engineering biological systems—such as the olfactory systems of insects. We’re running closed-loop experiments. We work on the localization of sound sources. We’re working on stochastic computing.
Our roadmap reaches out to 2023. We’ll provide algorithmic synaptic computing, multi-compartment neurons, and other advances. We are going public. It’s time for these systems to leave the lab. It’s important that people start to play with these things. We’ll sell boards with tools beginning next year.
10:00 Dharmendra Modha, IBM Fellow and head of IBM’s SyNAPSE project
The goal of our project was to make a computer with the computational capability of a brain in a space the size of a shoe box.
We built models of the mammalian brain to understand how to build a network. In 2012, we put together a simulation at the scale of 100 trillion synapses, the same as the human brain, with simple math models of neurons and synapses. It required 1.5 million processors, and yet the simulation ran 1,500 times slower than real time. If it were made as capable as the brain, as fast as the brain, the simulation would consume 12 gigawatts of power, enough to power LA and New York City combined.
The underlying tech of the brain is biological, organic. What we have today is a CMOS substrate. So we’re using CMOS architecture until new computational architectures become available. We have created a network of non-Von Neumann neuro-synaptic chips. In 2011, we demonstrated chips on the scale of a worm brain. They were the building blocks of the architecture we’re working on. It could do simple things, like play Pong. These were the humble beginnings.
The next idea was to tile the cores, so you have limitless possibilities for expansion. Three months ago we unveiled a chip that has 1 million neurons and 256 million synapses. It consumes just 70 milliwatts. It’s capable of 46 billion synaptic operations per second per watt. It’s a supercomputer the size of a postage stamp consuming the power of a hearing-aid battery. We now have a system with 16 chips, with 16 million neurons. We can move the computer to the sensor, to the edge of the network, to process data as it is gathered.
We had to develop a different kind of algorithm. Instead of long strings of instructions, they’re short pieces of code.
To make the technology available outside, we have developed a single-chip board. It’s called Katherine the Great. We created a new training program called SyNAPSE University, to help people understand the technology.
So starting from the conception of the project in 2004, we have come a long way. The key message is that the technology, the tools and the teams are ready to begin in-market work and collaboration with university researchers. Our project is about developing technologies that are low power and small size, with real-time performance, for sensor-rich applications spanning the spectrum from mobility to scalability.
We want to unlock applications with tremendous benefits to society.
10:30 Michael Hawrylycz, Investigator, Allen Institute for Brain Science
Cell Type and Computation
We build large scale platforms for investigating gene expression and other modalities. We make all of our work and data available online.
We map differentially expressing genes. Then you can study the genes. You can rank genes in the genome based on their reproducibility in the human brain. Only a minuscule fraction of the genes expressed in the brain have been studied and targeted with drugs.
We study connectivity—in one case we have mapped the mouse brain. We put together large-scale models and simulations that try to predict the functioning of the brain. If you’re going to do large-scale simulations of the brain, where do you need to land on the spectrum from simplified to biologically realistic models? These are the questions we’re dealing with. We have done graphic representations of large numbers of neurons and synapses in action.
We want to learn about the brain for two reasons. First, we want to make life better. The other reason is to advance cognitive computing. We define intelligence as what we do, how we think. Can we expand on it using machines?
Using data from our work, we’re trying to understand realistically what’s happening in the human brain. We’re using computing to do that. Others will use this understanding to build more capable computers.
10:45 Richard Cytowic, author of Synesthesia: A Union of the Senses
Synesthesia’s Challenge to Brain-Inspired Computing
Synesthesia is the coupling of the senses. Some people see numbers and letters as colors. People taste when they see things, or hear things. Four percent of the population experiences this phenomenon. It’s a trait, like having blue eyes; there’s nothing wrong with it.
It’s more common in artists, who excel at making metaphors.
But elements of synesthesia are common more broadly in the population. For instance, if you color white wine so it looks like red wine, for many people it will taste and smell like red wine. Brain-inspired computers will have to have the ability for different sense-based streams of data to intersect with each other, the way our senses relate to one another. They will have to be multimodal.
Synesthesia helps us understand how perception, metaphor and memory are related to each other.
11:20 Vijaykrishnan Narayanan, Professor, Penn State
Visual Cortex on Silicon
This concept and ambition has been around for decades. We have learned so much about the brain and we have made progress in machine vision. Our group is using the brain as inspiration for computing. We use it to engineer systems. In Indian mythology, the third eye is really powerful. It shows compassion. But it also can spew fire, if needed. I’m going to concentrate on the compassion part.
I want to create systems that can walk with me and do perception. We want to leverage this knowledge, and augment it, and take it to many applications.
We used brain-inspired models for the chip used in the Neovision Challenge. The problem is that visual clutter rapidly degrades our ability to recognize objects. We can use computer vision to improve recognition in clutter. We use top-down, spatial and object-oriented techniques. It’s not just spatial locations, but also objects and their features.
We’re looking at domain-specific vision processors—a large number of simple processors. We want to create hardware that can capture the coding, structural and adaptive learning aspects of the visual cortex. We call this neuromorphic hardware.
We’re looking at phase-change memory as a key element.
One application is to help visually impaired people go shopping. We’re working with 24 high school students to get analysis of what they would find useful—what kinds of verbal help they need. As they walk along the store aisle, the system recognizes the products and it learns how items are organized in the store. It can shop for things on the shopping list. It can guide the person through the store. It’s vital to have real-time response. Also there will be applications in automated weapons systems. That’s the third eye spewing fire.
Our work gets the students excited. It gets them involved in STEM. I urge you to share your SyNAPSE University material with universities and even with high school students.
11:50 Tobi Delbruck, Co-Founder, iniLabs; Professor, ETH Zurich
My dream is to get something neuromorphic into mass production. Once you have done that a lot of innovation can happen.
My focus is on the retina.
Look at the horse-in-motion sequence from 150 years ago. Check it out on Wikipedia. It’s the idea of a sequence of still pictures, which you analyze one-by-one to learn about motion. A sequence of still pictures is the language of machine vision. It’s a sampled sequence of frames.
We have a new sensor, a dynamic vision sensor. (shows traditional video capture on stage) You have to process all the pixels from every frame. With dynamic vision, we sample just the pixels that are different in each frame, so you don’t have to process the same information over and over again.
The active pixels send out their own location information—their addresses. They’re spiking, based on change-in-brightness events.
What is it good for? It’s good for fast, sparse, cheap vision.
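Delbruck’s change-driven sampling can be sketched in a few lines. This is an illustrative model of address-event generation, not iniLabs’ actual sensor pipeline (a real DVS pixel operates in continuous time, not on frame pairs):

```python
import numpy as np

def dvs_events(prev_frame, new_frame, threshold=0.15):
    """Emit (x, y, polarity) address-events for pixels whose
    log-brightness changed by more than `threshold`."""
    # DVS pixels respond to relative (logarithmic) brightness change.
    diff = np.log1p(new_frame.astype(float)) - np.log1p(prev_frame.astype(float))
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    # Each active pixel reports its own address plus a polarity bit:
    # +1 for a brightness increase, -1 for a decrease.
    return [(int(x), int(y), 1 if diff[y, x] > 0 else -1)
            for y, x in zip(ys, xs)]

a = np.zeros((4, 4))
b = a.copy()
b[1, 2] = 10.0            # one pixel brightens
print(dvs_events(a, b))   # [(2, 1, 1)], only the changed pixel fires
```

An unchanged scene produces an empty event list, which is where the sparseness and power savings he describes come from.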
12:20 Rajit Manohar, Professor and Associate Dean, Cornell Tech
All of these neuromorphic systems run on asynchronous circuits.
Traditional electronics are synchronous. It’s digital clocked logic. It developed that way because hardware resources were expensive. You wanted to maximize the productivity and the use of the components, so you use the clock to schedule things. You do useful work with every clock tick.
But unfortunately since the new algorithms don’t look like this, you have a mismatch. So you retrofit. People retrofit all their logic.
The problem with the time-based approach is the system waits for work to show up, through messages. So a lot of the system is idle while it waits.
Instead, in neuromorphic computing, we use self-timed logic. The neurons and synapses are the compute elements. They produce or process spikes.
TrueNorth, the SyNAPSE project chip, is very power efficient. It’s a hybrid between spike-driven and clock-driven computation.
The spikes go through the system to the location where they’re directed, but they’re not delivered until it’s time to use them. The algorithm sets the timing.
The algorithm is a graph, or map, representing neurons, synapses and their properties.
The chip is configurable for different kinds of applications.
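The algorithm-as-graph idea Manohar describes might be pictured as a small data structure: nodes are neurons carrying their parameters, and each synapse edge carries a weight and a delivery delay. The field names here are illustrative, not an actual TrueNorth model format:

```python
# Nodes are neurons with parameters; edges are synapses. The delay
# field is what lets the hardware hold a spike until it's time to
# use it, with the algorithm setting the timing.
network = {
    "neurons": {
        "n0": {"threshold": 1.0, "leak": 0.1},
        "n1": {"threshold": 0.5, "leak": 0.0},
    },
    "synapses": [
        # (source, target, weight, delay_ticks)
        ("n0", "n1", 0.8, 3),
    ],
}
```

Configuring the chip for a new application then amounts to loading a different graph rather than writing a new instruction stream.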
12:45 Fei-Fei Li, Director, AI Lab, Stanford
A Quest for Visual Intelligence
What does it mean to be visually intelligent?
We run an experiment: we show people a lot of photos, with blank pages between them, and ask them to write down what they see. Even though the pictures are flashed really fast, people can understand what they see in them.
Even at 500 milliseconds, half a second, people can tell stories based on what they have seen.
No computer today can do what humans can effortlessly do. They’re very, very far from telling what they can see.
This wakes me up in the morning and gets me to work. Our dream is to make it possible for computers to understand what they see.
If we can enable them to see as well as humans or better, we can help so many aspects of our society.
Computer vision started in the summer of 1966. AI had already been born. A professor at MIT, Seymour Papert, had decided to solve vision in one summer—with his students.
MIT has smart students, but they didn’t solve vision that summer. We’re still on the quest. Why is it so hard? Why did we miscalculate?
Measuring pixels is very different from understanding a scene—the task our brain has solved.
Plato talked about the Allegory of the Cave. In his story, prisoners were tied up, forced to look at a wall and see shadows on the wall. Their task was to figure out the story behind them by seeing the projection. This is how humans see and interpret what they see.
Computer vision has done a lot of exciting things, but it still has a long way to go.
(shows picture of baby wombat) Google thinks it’s a gray blob against a red background. But we know it’s an animal, and people from Australia know it’s a wombat.
I believe to solve the problem of computer vision we need three ingredients coming together: data, learning and knowledge. The three ingredients have to come together and interact with each other to make it happen.
Early computer vision scientists borrowed from psychologists. They used algorithms that captured the idea that objects are made of parts: bring the parts together, and people recognize them. This was object recognition.
Later the field turned to 3D reconstruction. A lot of progress was made.
In 2000, scientists brought machine learning to bear on the problem. It changed how we do things. As a grad student, I read a paper, Realtime Face Detection, in 2001. It was a seminal article. Camera makers used the algorithm to recognize faces.
Look at the explosion of data on the Internet. Much of it is multimedia data. Pixel data is the dark matter of the Internet.
ImageNet was a big breakthrough. It changed the way we understood deep learning technology.
What’s next? In my opinion, we have to look at knowledge again. Watson reminds us how important this is, and it’s important in computer vision.
We need knowledge graphs expressed in algorithms to make it possible for machines to reason like humans so they can see like humans.
My students have produced an algorithm that recognizes the wombat as a mammal. We’re making progress but we still have a long way to go.
2:15 Gill Pratt, SyNAPSE Program Manager, DARPA
Complexity vs Power in Natural and Man-made Systems
Think about nature, about trees. They’re plentiful. They’re constantly fighting for energy. Energy in nature is scarce.
A tree doesn’t seem to have a problem spreading seeds all over the place, and to have almost none of them turn into new trees.
It’s a dichotomy. Energy is scarce and complexity is almost free.
Think about that as we think about man-made systems.
NeoVision 2 was a program we ran before SyNAPSE. It focused on vision. It was based entirely on algorithms, with no new hardware—unlike SyNAPSE.
We found we could get drastic reductions in power using neuromorphic methods. We used a lot more hardware, but not as often. We didn’t move the data around as much.
We asked could we do even better, and in SyNAPSE, we’re getting another order of magnitude of energy savings.
Robots today are about 100 times less energy efficient than people and horses. The same goes for computers compared with the brain. So how do we make machines more energy efficient? The answer is to mimic nature.
The basic idea is it’s time for a new generation of computers that have size, weight and energy consumption similar to the brain’s.
The brain only moves data around when it needs to. Most of it is idle most of the time. So it only uses energy when it needs to. Complexity is almost free.
That’s the principle underlying the SyNAPSE chip.
We must drastically lower the cost of complexity.
I want to challenge all of you to come up with near-term applications for TrueNorth, to show the world what it can do.
I run the DARPA robotics challenge. We’ll launch a new wave next summer. The focus is on dealing with disasters. In disasters, power and communications networks often break down, so robots have to be more autonomous. We want the robots to be as autonomous as possible.
IT companies have been investing a lot in the cloud. There are already about 10 billion photographs on the Web. The cost of storage and communications are plunging.
Size, weight and power are at a premium. So why not have computer brains be located not in the robot but in the cloud?
The SyNAPSE project comes in when the robot can’t connect to the cloud, or knowledge about the physical universe is sparse. In those situations, the robots have to be autonomous. They’ll need to have their brains on board.
I want to challenge you to come up with more ideas for applications.
4:00 The SyNAPSE team
SyNAPSE Deep Dive
(14 members of IBM’s SyNAPSE team take turns presenting. Here are bits from some of them.)
Bryan Jackson: Because of the onset of big data we need a new computing paradigm. It has to be low power and real time.
Driverless cars will need to gather 1GB of data per second and respond in less than 200 milliseconds. So you need an onboard brain.
Smartphones will have more than 10 sensor types, they need to react in 200 milliseconds, and they need to be always on. So they need a low-power processor.
Cameras used for security need to have continuous video streams, they have to be always on, and they must be able to react in 1 to 5 seconds for maximum value.
Rodrigo Alvarez-Icaza: Researchers from IBM labs around the world collaborated to create the TrueNorth chip and software.
We had to invent a completely new architecture. It’s non-Von Neumann. There’s no clock. It’s massively scalable. One TrueNorth chip can talk to another TrueNorth chip, so you can tile them and scale up fast.
The chip takes spikes as inputs—little packets of information deliverable in space and time.
We began by making a simulator, called Compass. We ran it on a supercomputer. You can get the same results whether you load your program into the simulator or the TrueNorth hardware system.
This approach made it possible to have our software programmers go ahead on writing algorithms while another team worked on designing the hardware.
John Arthur: Here’s how we program TrueNorth. In the core, we have neurons, which do the computation; axons, which are the spike inputs; and the synapses, which connect the axons with the neurons. It’s a classical neural network.
To program it, we set the synaptic weights and neuron parameters.
We will have systems with thousands and tens of thousands of cores, but we program them in the same way. In addition, because we have many cores, we have to set up the connections from neurons to axons on different cores.
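John Arthur’s programming model can be sketched as a toy simulation: a binary crossbar of synapses connecting axon inputs to integrate-and-fire neurons, programmed by setting the crossbar, weights and neuron parameters. The class name, parameter choices and update rule here are a simplification for illustration, not IBM’s actual interface:

```python
import numpy as np

class NeurosynapticCore:
    """Toy model of one core: a binary axon-by-neuron synapse crossbar
    driving leaky integrate-and-fire neurons. (The real TrueNorth core
    is 256 axons by 256 neurons; this is an illustrative sketch.)"""

    def __init__(self, n_axons, n_neurons, threshold=1.0, leak=0.0):
        # Programming the core = setting the crossbar, the synaptic
        # weights, and the neuron parameters.
        self.crossbar = np.zeros((n_axons, n_neurons), dtype=bool)
        self.weights = np.ones(n_axons)            # per-axon synaptic weight
        self.threshold = np.full(n_neurons, threshold)
        self.leak = leak
        self.v = np.zeros(n_neurons)               # membrane potentials

    def step(self, axon_spikes):
        """One tick: integrate weighted input spikes, apply leak,
        then fire and reset any neuron that crosses its threshold."""
        drive = (axon_spikes * self.weights) @ self.crossbar
        self.v += drive - self.leak
        fired = self.v >= self.threshold
        self.v[fired] = 0.0                        # reset after spiking
        return fired

core = NeurosynapticCore(n_axons=2, n_neurons=2, threshold=1.5)
core.crossbar[0, 0] = core.crossbar[1, 0] = True  # both axons feed neuron 0
print(core.step(np.array([1, 1])))                # [ True False]
```

Connecting output spikes of one core to axons of another is what lets the same programming model scale to thousands of cores, as described above.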
Paul Merolla: We have training methods to take neurosynaptic cores and train them to learn from data.
Arnon Amir: To program the chip we created the Corelet programming language. Each Corelet performs a specific task. We then can connect and combine Corelets, and we can build a library of Corelets. Now we have hundreds of them.
Myron Flickner: We created a Corelet for composer recognition. We were able to identify the correct composer for music roughly 75% of the time.
Filipp Akopyan: We have developed a TrueNorth mobile development platform. We call it 1 million neurons in your pocket. (He pulls a small circuit board out of his suit pocket. It’s the size of a first-generation iPhone.) It’s 70 millimeters by 125 millimeters, and weighs less than 100 grams.
We built a motion sensor and a pressure sensor right into the board.
We can embed it in cameras, smartphones and laptops. It could be used in cars.
It can take input from multiple senses at the same time and process them in real time.
It could be used in delivery drones.
We envision this board having an impact on every aspect of our lives.
Jun Sawada: We can keep the same high efficiency as we scale to big systems and still keep the total power consumption at a reasonable level.
We have a 16-chip board today. You can stack the boards together and the chips will talk to each other.
It would be an ideal platform to build a large scale neural network.
We envision a 4,096-chip rack with 1 trillion synapses, consuming just 4 kilowatts of power. If we connect 96 racks, we’ll have a supercomputer with 100 trillion synapses. That’s the number of synapses we have in our brains.
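The scaling arithmetic checks out against the per-chip figures Dharmendra Modha gave earlier in the day (1 million neurons and 256 million synapses per chip); a quick back-of-the-envelope:

```python
# Per-chip synapse count from Modha's talk; rack and system sizes
# from Jun Sawada's description above.
synapses_per_chip = 256e6
chips_per_rack = 4096
racks = 96

rack_synapses = chips_per_rack * synapses_per_chip
print(f"{rack_synapses:.2e}")      # 1.05e+12, about 1 trillion per rack

total_synapses = racks * rack_synapses
print(f"{total_synapses:.2e}")     # 1.01e+14, about 100 trillion
```

At 70 milliwatts per chip, the 4,096 chips alone would draw under 300 watts, so the 4-kilowatt rack figure presumably covers boards, I/O and cooling overhead as well.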
Bill Risk: To help people who are interested in learning more, we created a program called SyNAPSE University. We started offering it as an in-person, onsite program. It’s hands on.
We started offering it just to IBMers, but have started putting some of the curriculum online in the form of books and videos.
When the single chip mobile platform is ready you’ll be able to access it remotely from your computer via the Internet.
5 p.m. Panel: Brain, Computers, Society and the Future
Moderator: Jim Spohrer, Director, Cognitive Systems Institute, IBM Research
(Here are some highlights)
Mark Anderson, Founder, Future in Review
I have spent my whole life predicting what will happen next in technology. It’s always based on patterns. When I look at this stuff today—I think you guys are underselling. It’s hard for human beings to see the real world. You have to drop your frames, so you can see the world properly. Our whole computing environment doesn’t do it now. But this new technology could do it. We’ll use the TrueNorth chip and other technologies to look out at the world and find out things we didn’t know. Discovery will be a major use for this. This is a huge step forward.
Horst Simon, Deputy Director, Lawrence Berkeley National Laboratory
Moore’s Law is done; it’s going into retirement. The big concern is power constraints. The theme that impressed me today is that we are at the end of the Von Neumann architecture. It’s an important point. When Von Neumann and Oppenheimer built the first computers they were for designing nuclear weapons. Today we’re doing all of these other things with machines that were architected for building bombs. We didn’t toss the architecture away because the gains from Moore’s Law made it possible for us to use an inappropriate architecture. Now we need technologies like TrueNorth to take us into the future.
Gary Marcus, Professor, NYU
One of the things I’m working on is a piece for the New York Times, Is the Brain a Computer. I have wrestled with this since I was a graduate student. Obviously it’s not a traditional computer. TrueNorth gives us another model. Exploring new computational architectures can help us understand how the brain works.
I think about the transistor, and what a scaling building block it has been over the decades. What do we see now—is there an equivalent in the neuromorphic technologies?
Andreas Andreou, Professor, Johns Hopkins University
The technology we see here is an element of a mix of processors—regular processors, FPGAs and this kind of processor. We’ll also see better coupling of memory and processor.
We hit a long dead zone in the ability of computers to deal with the real world. Several years ago, I saw a need for a pattern recognition processor. I didn’t know who would build it. It turned out to be these guys, Dharmendra and the team. This will be a day I remember—because the world needs a chip which by its nature recognizes the nature of the world.
Miyoung Chun, Executive VP, The Kavli Foundation. She helped organize the US BRAIN Initiative.
I want to turn the topic a little differently. Dharmendra pointed to where the funding came from to develop these technologies, pointing to three legs of the funding stool. He listed DARPA, IBM funding and university funding. I think it’s time for foundations to join and work jointly with the other legs of the stool. Also, we have to talk about ethics. We should anticipate what could go wrong. These new technologies will have great benefits to society. But we have to think ahead and prevent possible harm. We need to communicate with the public. They have to be warned and informed consumers.
Elon Musk is in the news warning about the dangers of computers. Part of it is ridiculous and some of it is serious. AI still has a long way to go. They’re very limited tools. The threat isn’t real yet. But it does make sense now to think about some of the implications. What do we need to do to keep a powerful agent from doing serious destruction? Some science fiction scenarios do turn out to be real. What do we need to do in advance to deal with that? We don’t have an answer. We need to start thinking and talking about it.
I think it’s really more of an urgent issue than that. Things will happen sooner than we think. We need to have in place rules that make it possible for the scientists to go forward.
We all think about the Terminator and the robots of the future. The biggest danger is today, and it’s the surveillance state. See the documentary about Snowden. This technology, put into surveillance cameras everywhere, could make it possible for security agencies to see so much of what we do. Do they monitor whenever somebody puts up a poster on a wall? It’s something we have to start thinking about today.
Jayashree Subrahmonia, VP of Products, IBM Watson Group
We have the oncology solution. People ask, is my doctor now a machine? We characterize it as an advisor to a doctor. The advisor can help doctors make better decisions. The power of technology is here, but we have to take on these important questions.
To learn more about the new era of computing, read Smart Machines: IBM’s Watson and the Era of Cognitive Computing.