By Jean Noel Le Foll, General Manager, CFAO Technologies
Brazil, Russia, India, China, Turkey, South Africa and Mexico are the fastest-growing markets for computer equipment, making up 14% of the global IT market. The regions increasing their IT purchases the most are the Middle East, Eastern Europe and Africa, according to Forrester Research. A growing list of companies and governments in these emerging economies is relying on the IBM System z mainframe to build their infrastructures.
Senegal's Customs Ministry brought all of the country's import and export processes online with System z, and now collects customs revenue equivalent to 30% of gross national product, amounting to two billion Senegalese francs every day. In the process, the Ministry increased the performance of its systems by 70%, reduced power consumption by 20% and cut operating costs by 30%.
Customs officers in Senegal and their partners now have real-time access to information across all of the country's border checkpoints. They can check whether the correct duty has been paid on shipments of goods coming through the main checkpoints. This is a vast improvement over the Ministry's previous system, which covered only two checkpoints. Senegal's Customs Ministry is using technology to put critical information to work and boost the country's economic growth.
My company, CFAO, also worked with the government of Cameroon to help it build its infrastructure on the mainframe. There, the Ministry of Finance is using a System z mainframe to enable smarter banking and modernize the payroll processes for the country's government employees. The new system is increasing the security of the Ministry's payroll system and improving the efficiency of processes such as generating pay slips.
If you own a car in North America, you're told to change the oil every 3,700 miles or six months. This applies whether you live in Florida and drive peacefully to work, or in Minnesota, where winter temperatures are frequently subzero. In Scandinavia, where I live, we change the oil in our vehicles less frequently out of concern for the environment.
But no matter what schedule you use, the point is that old-fashioned service manuals are not smart. Cars are used in different ways, and should therefore be serviced in different ways. And the same goes for any type of machinery.
Just because machines start out the same way doesn't mean we should service them the same way. To determine how often a vehicle needs servicing, we need to consider the environment in which the machine operates and how it is being used. The trick, of course, is to figure out just that: where and how are they being used?
It all starts with collecting data. Sensors are becoming increasingly sophisticated. Using heat cameras, we can detect wear inside a ball bearing. Microphones can help us detect the slightest change in a motor's frequency, and with accelerometers, small sensors that measure acceleration, we can record the motion of robotic arms and spot inconsistencies.
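To make the microphone example concrete, here is a minimal sketch, not from the original post, of how such a frequency check might look in Python; the sample rate, healthy baseline and tolerance are illustrative assumptions.

```python
# A minimal sketch of detecting motor-frequency drift from a microphone or
# accelerometer signal with NumPy's FFT. All constants are assumptions.
import numpy as np

SAMPLE_RATE = 8000   # samples per second (assumed)
BASELINE_HZ = 50.0   # dominant frequency of a healthy motor (assumed)
TOLERANCE_HZ = 1.5   # drift beyond this suggests wear (assumed)

def dominant_frequency(samples: np.ndarray) -> float:
    """Return the strongest frequency component of a 1-D signal."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
    return freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC component

def motor_drifting(samples: np.ndarray) -> bool:
    return abs(dominant_frequency(samples) - BASELINE_HZ) > TOLERANCE_HZ

# Example: a simulated one-second recording of a motor running at 53 Hz.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
recording = np.sin(2 * np.pi * 53.0 * t) + 0.1 * np.random.randn(SAMPLE_RATE)
print(motor_drifting(recording))  # True: a 3 Hz drift exceeds the tolerance
```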
These sensors work much like the nervous system in our body. Each sensor on its own is somewhat useful, but when you start combining the sensory data from multiple sources with statistics and previous recordings, you really start to unlock their potential. Feeling the ground tremble, hearing a train horn, and seeing that you are standing on train tracks are of little value on their own, but combining the information might prove lifesaving. With such input, you know that taking one step to the side is smarter than running along the tracks.
This is what we call predictive maintenance: measuring, in real time, how machines are doing and combining that with statistics and knowledge to fix things before they break, not after. This gives customers the chance to plan for downtime, and to make repairs before faulty parts affect others. In many cases, they can limit repairs to a few dollars instead of thousands.
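As one illustration of the idea, here is a minimal sketch, with assumed sensor names and thresholds rather than any real product's logic, that tracks each sensor stream against its own recent history and raises an alert when a reading drifts several standard deviations away.

```python
# A minimal sketch of combining several sensor streams with simple statistics
# to raise a maintenance alert before a part fails. Thresholds, window sizes
# and sensor names are illustrative assumptions.
from collections import deque
from statistics import mean, stdev

class PredictiveMonitor:
    """Flags a machine when recent readings drift from its own history."""

    def __init__(self, window: int = 100, z_threshold: float = 3.0):
        self.history = {}          # sensor name -> recent readings
        self.window = window
        self.z_threshold = z_threshold

    def observe(self, sensor: str, value: float) -> bool:
        """Record a reading; return True if it looks anomalous."""
        readings = self.history.setdefault(sensor, deque(maxlen=self.window))
        anomalous = False
        if len(readings) >= 10:    # need a baseline before judging
            mu, sigma = mean(readings), stdev(readings)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                anomalous = True
        readings.append(value)
        return anomalous

monitor = PredictiveMonitor()
# Streams such as bearing temperature and vibration are checked together;
# the last reading is a simulated fault.
normal = [(41.0 + 0.05 * (i % 5), 0.21 + 0.002 * (i % 5)) for i in range(50)]
for temp, vibration in normal + [(49.5, 0.80)]:
    alert = any([monitor.observe("bearing_temp_c", temp),
                 monitor.observe("vibration_g", vibration)])
print("schedule maintenance" if alert else "all clear")
```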
This can also be applied to products already sold. A car manufacturer could put sensors in its cars to report on how each car is performing. That would yield a large dataset for finding faults and errors, which would help us evolve future products and make servicing the cars smarter: in other words, letting the customer know that a part is about to break before it actually does.
On a smarter planet, we will stop treating cars, or machines in general, as a homogeneous group. Since each one is used differently, it should be serviced based on the health of each part, not on when the booklet says it's due for servicing.
By Harry van Dorenmalen
Chairman, IBM Europe
The first, Sequoia, is the world's most powerful supercomputer, capable of calculating in one hour what would otherwise take 6.7 billion people using hand calculators 320 years to complete if they worked non-stop. It is installed at the National Nuclear Security Administration's (NNSA) Lawrence Livermore National Laboratory in California.
The second is the first commercial machine cooled by hot water, built for the Leibniz Supercomputing Centre in Germany. It will be used by scientists across Europe to drive a wide range of research, from simulating the blood flow behind an artificial heart valve to devising quieter aeroplanes.
What's impressive about these machines is not just their massive processing power; they are remarkably energy efficient, too.
The TOP500 ranking of supercomputers today recognized the Lawrence Livermore National Laboratory's Sequoia as the fastest computer in the world. The computer, an IBM Blue Gene/Q, was designed to be extremely energy efficient. Like previous Blue Gene machines, it is powered by low-frequency, low-power embedded PowerPC cores, in this case an astonishing 1.6 million of them. Sequoia produces 16 petaflops of computational muscle. That's 16 quadrillion operations per second. It's an important stepping stone on the way to exascale computing: machines that will be more than 50 times as fast as today's fastest.
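As a rough sanity check on the hand-calculator comparison above, assume each person performs one calculation per second:

$$6.7 \times 10^{9}\ \text{people} \times 320\ \text{yr} \times 3.15 \times 10^{7}\ \text{s/yr} \approx 6.8 \times 10^{19}\ \text{calculations},$$

while Sequoia at 16 petaflops delivers

$$1.6 \times 10^{16}\ \text{ops/s} \times 3600\ \text{s} \approx 5.8 \times 10^{19}\ \text{operations}$$

in a single hour, so the two figures agree to within about 15 percent.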
Read a related post on the IBM Research blog.
When IBM unveiled its Smarter Planet agenda in late 2008, government and business leaders in Poland were intrigued, but the global financial crisis made it difficult for them to act on their positive impulses. Today, in spite of lingering concerns about the situation in Western Europe, the Smarter Planet concepts are starting to gain traction, especially with government leaders.
The Polish central government is launching an e-health initiative, a new citizen ID program and a new electronic tax filing system. “Smart is all about how to make the citizen’s life easier, safer and more ecologically sustainable,” says Anna Sienko, IBM’s general manager for Poland and the Baltic countries.
Poland is one of the fastest-growing economies in Europe right now, and business and government leaders are determined to stimulate growth through innovation. ICM, a research institute affiliated with the University of Warsaw, does its own research in everything from weather prediction to quantum computing, but also provides computational power for other researchers throughout Poland.
By Andras Szakal
IBM US Federal CTO
A smarter government is more agile: better able to respond effectively to changing needs and citizen dynamics. One of the best ways to improve the way our government works, in both its operational efficiency and the services it provides to citizens, is through cloud computing.
Yesterday I participated in the Congressional High-Tech Caucus Cloud Task Force’s “Cloud Computing: A Primer” in Washington, DC as part of an industry panel which tackled issues critical to cloud utilization. The event was designed to help our legislators understand how to optimize IT and lower costs, reducing government waste. I was excited to be able to take this message to Congress, and appreciated the opportunity to join Rep. Michael McCaul (R-TX) and Rep. Doris Matsui (D-CA), co-chairs of the High Tech Caucus.
As citizens, we have a lot of reason to be excited about the promise of cloud computing to help our government operate more efficiently. We like to feel that our tax dollars are hard at work, and that maximum value is being squeezed out of every penny. Rapid advances in cloud technologies, in areas such as resource pooling, virtualization and operational automation, can help transform and consolidate government data centers, ensuring more effective use of resources and lower operational costs.
By Dr. John E. Kelly III
IBM Senior Vice President and Director of IBM Research
When I was a child, my father worked at General Electric’s research lab in Niskayuna, N.Y. I would visit and watch him tinker with vacuum tubes—light bulb-like devices that were used to direct electrical current in all sorts of gizmos, from radios and TVs to radar and computers. At the time, I didn’t fully understand what he was doing, but those visits inspired me to study science and, ultimately, to get degrees in physics and materials engineering.
I later came to understand that I had witnessed one of the great transitions in the history of technology. While my dad was showing me vacuum tubes, other engineers at GE’s lab were experimenting with the vacuum tube’s successor, the transistor, which ultimately ushered in modern electronics and personal computing. Those core technologies enabled computers that could be programmed to perform a wide variety of tasks.
Today, we are at the dawn of another epochal shift in the evolution of technology. At IBM Research, we call it the era of cognitive systems.
This is a big deal. The changes that are coming over the next 10 to 20 years, building on IBM's Watson technology, will transform the way we live, work and learn, just as programmable computing has transformed the human landscape over the past 60-plus years. You could even call this the post-computing era.
(We’ll discuss these issues on Twitter today from 4-5 p.m. ET. Join me (@angelluisdiaz) and Rackspace leaders by tagging your tweets with the hashtag #cloudchat (Twebevent makes it easy to participate). Feel free to send us your questions and comments using the hashtag.)
By Angel Diaz
Vice-president, IBM Software Standards
Cloud computing is changing the way we think about technology, and it's no passing fad. Whether it's consumers using the cloud to store music, startups turning to the cloud to get up and running without huge investments, or big businesses and governments relying on clouds to make more data more accessible, cloud computing is changing how business and society run, and opening up huge avenues of innovation.
Yet, as promising as cloud computing is, one of the biggest hurdles to widespread adoption is a lack of open standards.
For decades, information technology companies and their customers have been wrestling with one of the seemingly inescapable facts of the computing era: computing systems are designed to be either simple or flexible, but not both. It's one of the central dilemmas of enterprise computing.
The solution to this problem has been a long time coming, but we’re now on the cusp of a new era when we’ll provide simplicity and flexibility in a single system. One approach is a concept we call expert integrated systems.