Instrumented Interconnected Intelligent
January 12, 2009
12:07
 

How the Deep Web and Petaflop Power will put the smart in Smarter Planet

Smarter Planet means intelligent infrastructure for our energy grids, transportation systems, food supply chains and healthcare networks. OK, got that part.

It is also about trillions of devices and objects connecting to the Internet and changing the way billions of people live and work. Check.

But the truly revolutionary and transformative frontier may be the one that is this week’s focus: a New Intelligence, the leap in knowledge and insight that will come from new ways to process the large and diverse sea of information that smart systems and the “Internet of things” will generate.

For individuals, these new smarts could be manifest in things like digitally enhanced memory, what some are calling a “life log” that captures, stores, and organizes an electronic collection of everything you do or would want to save: conversations, images, transactions and interactions of every kind, all searchable and as accessible as your biological memory. Augmented memory made it into IBM’s latest Five in Five report, which focuses on five technologies that may change the way we live in the next five years.

This new intelligence also promises to serve your health and wellness. Not only could all of your lifelong healthcare-related data be securely collected in a kind of online health data bank, but new computing and sensing built into your clothes and environment would continually monitor the state of your health. Together these resources, along with the growth of personalized genomic medicine, could keep you healthier by catching early signs of illness so that doctors could then prescribe advanced preventative care.

For businesses, the New Intelligence will offer richer and more real-time ways to run a company, including better ways to forecast market trends, make smarter decisions and maintain a more precise awareness of every facet of a firm’s operations and performance.

For society, this ability to understand and probe the world at a fundamentally new and deeper level will enable scientists and researchers to hunt more effectively for solutions to our most pressing problems, such as clean energy, climate change and water scarcity.

This idea of an evolutionary jump in the IQ of the world may seem on the one hand fanciful, and on the other, a bit abstract. But several of the new Smarter Planet ads that launched this weekend bring the concept back down to Earth, like this one, called Smarter Petaflop.




The world may be becoming a more complex and information-intensive place, but at the heart of this vision for New Intelligence are two basic ingredients:

1. Unprecedented new sources, and quantities, of information.
2. New processing power and programs to turn raw data into a more valuable commodity: insight and intelligence that humans can use.

For the sake of brevity, I’ll just touch on one example that illustrates the two sides of the new intelligence coin.

Data Mining the “Deep Web”
Almost anyone can sense that the amount of information in the age of the Internet grows relentlessly. In fact, the Web we see or come across in searches is just the tip of the info-iceberg. This “surface Web” holds about 167 terabytes of data, while the total amount of information on the Internet, the so-called Deep Web or Deepnet, is estimated at about 91,000 terabytes, more than 500 times as much. We are already in the era of Big Data.

Powering Up the Petabyte Age
A petabyte is 1,000 terabytes, or one million gigabytes. To put that in perspective, Facebook stores about a petabyte of user photos, or roughly 10 billion pictures. Fortunately, the power to process this deluge is keeping pace.
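As a rough back-of-the-envelope check of those figures (both are approximations, so the per-photo size that falls out is only illustrative):

    # Back-of-the-envelope arithmetic for the figures above (all approximate).
    TERABYTE = 10**12                 # bytes
    PETABYTE = 1_000 * TERABYTE       # 1,000 terabytes = 10**15 bytes

    print(PETABYTE // 10**9)          # 1,000,000 -> one million gigabytes

    photos = 10_000_000_000           # ~10 billion photos (figure cited above)
    print(PETABYTE // photos)         # ~100,000 bytes, i.e. roughly 100 KB per photo

That works out to something on the order of 100 kilobytes per picture, which is roughly what you would expect for a compressed web-resolution photo.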

That information arms race is being enabled by supercomputers like IBM’s Roadrunner, housed at the U.S. Department of Energy’s Los Alamos National Laboratory, which in 2008 broke the “petaflop” barrier: more than a quadrillion calculations per second.


Indeed, the Web itself is becoming a vast platform for computing and data processing, through new techniques like grid computing, which harnesses the data-crunching power of idle computers and devices around the world, or cloud computing, which turns the fabric of the Web itself into a processing layer.
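To make the idea concrete, here is a minimal Python sketch of the fan-out-and-combine pattern that grid and cloud computing rely on. It splits a job across local worker processes; a real grid would hand the chunks to idle machines across the network, and the toy "crunch" task and chunk size here are just placeholders:

    from multiprocessing import Pool

    def crunch(chunk):
        # Stand-in for any per-chunk analysis (parsing, indexing, statistics, ...).
        return sum(chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        # Split the job into pieces that can be processed independently.
        chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]
        with Pool() as workers:        # on a grid, each worker would be a remote machine
            partial = workers.map(crunch, chunks)
        print(sum(partial))            # recombine the partial results into one answer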

One of the ways in which all this digital horsepower can extract insight from massive sets of data is to build very sophisticated models that can simulate and predict solutions to complex challenges: how protein molecules fold, how weather patterns and environmental conditions evolve, or how to optimize logistical nightmares like global airline routes.
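As a toy illustration of the optimization side of that claim, the sketch below orders a handful of stops with a simple nearest-neighbour heuristic; real airline-network models are vastly more sophisticated, and the coordinates here are invented for the example:

    import math

    # Hypothetical stop coordinates -- invented for illustration, not real route data.
    stops = {"A": (0, 0), "B": (5, 1), "C": (2, 4), "D": (6, 5), "E": (1, 7)}

    def distance(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def nearest_neighbour_route(start="A"):
        # Greedy heuristic: always fly to the closest stop not yet visited.
        route, remaining = [start], set(stops) - {start}
        while remaining:
            here = stops[route[-1]]
            nxt = min(remaining, key=lambda s: distance(here, stops[s]))
            route.append(nxt)
            remaining.remove(nxt)
        return route

    print(nearest_neighbour_route())   # ['A', 'C', 'E', 'D', 'B']

The greedy answer is not guaranteed to be optimal, which is exactly why the serious versions of these problems soak up so much computing power.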

Down the road, even more powerful models, based on emerging frontiers such as quantum computing or cognitive computing (which seeks to mimic the power of the human brain), will open up new ways to squeeze even more useful intelligence from our ever-growing sources of information, often in close to real time, when that new knowledge may be most valuable.

Finally, it may be less sexy than the long-term promise, but some of the most immediate uses for the New Intelligence are going to be in the operational nuts and bolts of business, fueling the productivity and efficiency gains that will empower enterprises to recover from the current economic crisis and get back to growth and innovation.

You can sample some of the posts around New Intelligence that we’ve collected as a “channel” on the Smarter Planet Tumblr site that complements this blog. And you can learn more about the New Intelligence at http://www.ibm.com/think.

Of course, all of us are smarter than any of us. So what does this New Intelligence model mean to you, and where do you see it going?

Jack Mason
IBM Global Business Services, Strategic Programs
jkmason@us.ibm.com








7 Comments
 
January 28, 2009
2:03 am

I’m not an IBMer or a programmer, but I would call myself a data superuser, spanning business, government, and the social web.
Since the article asked, I’d like to offer the layman’s perspective. The technology that drives new intelligence is light-years beyond what the average end user can understand, and contrary to most reasoning, that gap is only widening. Partly this is due to generational differences; you simply can’t teach all old dogs new tricks, although some are certainly willing to try a trick or two. Pair this with the next generation of developers who have grown up with the technology and user experiences that are driving this change, and you have a very large understanding gap to bridge. The importance of this gap isn’t always evident, but we’ve seen one tech development after another adopted to the point of complete integration into the younger generation’s lifestyle (e.g., cell phones and Facebook), and barely acknowledged or chronically misunderstood within older generations. The age-old problem is that those making the business decisions do not understand the technology, and those that do understand it are not making the decisions.
I’m a younger worker trying to get a much older team to start using business intelligence software. It is frustrating to keep beating my head against the wall because even the president of my company is clueless about what a server is and thinks “online storage” means “unsecure”. The unfortunate reality is that if they don’t get with the program here, the company will be outpaced by its competitors and lose business within the next few years; our customers simply won’t be able to work with our outdated systems.
I’m all for moving intelligence forward, but let’s do it responsibly. You can’t just throw out a new technology that takes the place of 200,000 workers doing data mining and expect them all to just go quietly into the unemployment line. We have to bring people along, not leave them behind; otherwise IBM and everybody else won’t have too many businesses left to buy their products/services.


Posted by: romabit
 
January 13, 2009
1:32 pm

Max, you raise some interesting points. I have a question about this comment, “You need to read Gigerenzer, Taleb, Damasio, Minsky and a few others to realize that intelligence is linked to making good decisions on limited information. More information turns everything into noise.”
I think the noise issue is really key. But can’t more sophisticated computing models and algorithms – designed by human intelligence – help manage and filter noise?
What is the assurance that the limited information a decision is based on is the right information? Clearly, a small sample, if skewed, can distort the basis of a decision, right? Curious to hear your perspective.


Posted by: Adam Christensen
 
January 13, 2009
12:11 pm

Max:
I agree that there are many challenges to the future of predictive analytics and more intelligent data mining in some of the areas you point to, or that even the most advanced technologies …
(In fact, I am reading Jeff Hawkins’ fascinating book On Intelligence, which details some of the debate and limitations in AI and intelligent machines. But look at what he’s now working on at his new company, Numenta!)
Still I can point to a couple of examples that underscore where breakthroughs in computer science and information technologies are leading to whole new avenues of inquiry, knowledge and useful application of new kinds of insight.
For example, IBM Research just announced a new microscope based on MRI principles that is 1000 times more sensitive than other methods, and will enable scientists to gain new intelligence on the structure of matter.
Here’s the link to the video clip:
http://www.youtube.com/watch?v=AAA4FGKCBik
On a different front, IBM and TD Bank Financial Group announced in early 2008 that they were exploring the use of a “stream computing” software system, running on a Blue Gene supercomputer with a new kind of software architecture. The goal is to examine thousands of real-time information sources to support financial services companies.
http://insidehpc.com/2008/04/02/td-bank-partners-with-ibm-on-stream-computing/
These aren’t magic bullets, but they are part of a larger pattern of progress that I think is leading us in the right direction.
You are right that there are real limits to what machine intelligence can deliver. And cognitive computing is in its infancy.
And you are also right that simply more data and more processing power — which is my great oversimplification of the nature of this new intelligence for a general audience — doesn’t by itself get you to anything like a true new kind of intelligence.
But I disagree with your assessment that nothing has changed in computer science, or that we aren’t actually on the brink of a sea change in what is possible in terms of analytics, business intelligence and simulation & modeling.
Moreover, I am very appreciative of you engaging in the dialogue. Our social, human and interactive exchange may be the most important kind of new intelligence we can harness.


Posted by: Jack Mason
 
January 13, 2009
11:07 am

Jack, I am truly sorry to rain on your parade! What you present is a marketing pitch with little real benefit to show. I too am an ex-IBMer, though I did leave 21 years ago. I did try to sell ‘OfficeVision’ and the paperless office back then; anyone remember? This ‘New Intelligence’ is the same thing. Grid computing is still in its infancy, quantum computing can perform additions on simple numbers, and cognitive computing is an idea and no more. I wrote a post on it some time ago: http://isismjpucher.wordpress.com/2008/11/24/ibm-cognitive-computing-research-versus-papyrus-uta/
The proposition that gathering more data, and crunching more numbers, will produce more knowledge and better decisions is an illusion. The data gathering is inaccurate, the data correlation is an assumption, and even where you find correlation, it does not establish CAUSATION. The mathematics used are guesswork. Yes, in the area of classical physics, where we build machines that perform repeated operations and where we can optimize and tune the models to be used, all this is useful, but in the business world, where many people take individual decisions, all the maths is useless.
I am sure that the Black-Scholes future utility model means something to you, and you can see into what financial maelstrom it has taken us. Our crisis is not just a Black Swan; it was produced by the illusions encoded in the model.
You need to read Gigerenzer, Taleb, Damasio, Minsky and a few others to realize that intelligence is linked to making good decisions on limited information. More information turns everything into noise. And even when you get good information, you will find that the context and the related interpretation of the user are as important as the data themselves. Why does everyone ignore Shannon? Data as such are meaningless.
If there is a way to create cognitive computing, its functionality has to learn from human interaction and human decisions, just like humans learn from others. Cognitive computing on masses of data is NOT human! AI failed because it totally missed the point that all human decisions are emotionally intuitive.
I have designed and patented a ‘learn-from-the-user’ or transductive agent and we implemented it for the Papyrus Platform. It does work astonishingly well and it works best with limited data.
IBM’s ‘Deep Blue’ was only able to beat Kasparov in 1997 after grandmaster Joel Benjamin created the winning strategy. It was not the computer that won.
IBM is still playing the same old game …


Posted by: Max Pucher
 
January 13, 2009
9:40 am

Great post, Jack; it really made me proud as an IBMer! I honestly think there is much more potential for raising awareness of how we can translate the New Intelligence into day-to-day business.
My opinion is that although there are many plans and forecasts, and IBM is powering significant research projects, people need to see the New Intelligence as something that is already happening and something we could benefit from NOW.
It’s not like we need to wait for a technology to allow us to improve how we live, work and do business. That may be true in particular cases, but most of it is already available. So we just need to reach for it and ask advice from companies like IBM on how we can improve our “old intelligence”.
just a fast thought :-)


Posted by: Silvia Mihailescu
 
January 12, 2009
2:45 pm

Daniel:
Glad you liked it. And I think you offer some additional context on http://lightlycaffeinated.com/, namely that this new intelligence is probably going to be a kind of diffuse, distributed artificial intelligence. Not exactly the kind of non-human intelligence we thought AI was going to deliver, but something different.


Posted by: Jack Mason
 
January 12, 2009
2:36 pm

Jack,
I think all of these advances in New Intelligence hinge on progress in AI and cloud computing. Hopefully I’ll have time to gather my thoughts on this and write more, but your article is fantastic.


Posted by: Daniel ostermayer
 