You name it. The faculty members at Carnegie Mellon University who are connected with the Center for Sensed Critical Infrastructure Research (CenSCIR) are busy applying smarter-planet technologies and thinking to practically any system of physical infrastructure. Now, in partnership with IBM, the center's leaders are creating a physical place to serve as a sort of clubhouse for researchers and organizations that want to tap into their brain power.
The IBM Smarter Infrastructure Lab, announced today, is going to be a 1,000-square-foot facility within one of the university's buildings. It will be equipped with engineering workstations, 3-D displays, a telepresence setup, massive data storage capabilities, and access to powerful clusters of number-crunching computers. “Here, people can organize and visualize their work. It will be a showcase for what we do,” says James H. Garrett, Jr., the co-director of CenSCIR and head of CMU’s civil and environmental engineering department.
Instrumented and interconnected objects like refrigerators, freezers, or even coffee bars are allowing building managers to run their buildings more profitably.
Following is a guest post from David Bartlett:
Companies can measure and control energy consumption in ways previously impossible, using the combination of low-cost sensors, controls, robust wireless mesh networks and ubiquitous access to the internet. That’s the gist of a deal IBM announced today with Tridium Corp., a division of Honeywell. Tridium makes sensors and the software found in a huge variety of commercial devices and structures.
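To make the idea concrete, here is a minimal sketch of the kind of aggregation such a system performs. Everything in it is hypothetical: the device names, sample values, and sampling interval are invented for illustration, and a real deployment would pull live readings from a building-management system over the network rather than from a hard-coded dictionary.

```python
# Hypothetical sketch: estimating energy consumption from periodic
# power readings taken by instrumented building appliances.

# Simulated power readings (watts), sampled every 15 minutes,
# for three instrumented devices. Invented data for illustration.
readings = {
    "refrigerator": [120, 118, 125, 122],
    "freezer":      [200, 210, 205, 198],
    "coffee_bar":   [900, 0, 0, 850],
}

SAMPLE_HOURS = 0.25  # 15-minute sampling interval


def energy_kwh(samples, interval_hours=SAMPLE_HOURS):
    """Approximate energy used (kWh) from power samples in watts."""
    return sum(samples) * interval_hours / 1000.0


for device, samples in readings.items():
    print(f"{device}: {energy_kwh(samples):.3f} kWh")

total = sum(energy_kwh(s) for s in readings.values())
print(f"total: {total:.3f} kWh")
```

With readings like these in hand, a building manager can compare consumption across devices and time windows and decide where controls (shutting off the coffee bar overnight, say) would pay for themselves.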
Back in June I mentioned how Coventry was running the world's first city-wide Jam to open up a conversation with residents and businesses to find innovative ways to make the city smarter.
A month on and 2,000 posts later, IBM and Coventry are teaming up to make the ideas raised in CovJam real, transforming Coventry over the next 30 years.
Healthcare is a complex system with many moving parts, including vast networks of doctors, patients, hospitals, clinics, pharmacies, insurers, medical equipment … and the millions of pieces of data, images, prescriptions, documents, and other information that get exchanged every day.
In Namibia, they have a completely different understanding of the phenomenon of “mobile banking” than we do in the United States. For the past two years, the First National Bank of Namibia has been dispatching small trucks–really, banks on wheels–to make the rounds of far-flung settlements in the vast Kalahari Desert, in southern Africa. The vans are equipped with computers and connected to the bank’s network via a satellite data communications hookup. It’s the old bookmobile idea transplanted to Africa and banking.
This intriguing bit of news comes from Stephen Lloyd van Rhyn, the bank’s head of information technology, who late last year installed an IBM mainframe at the bank’s headquarters. Van Rhyn spoke at yesterday’s launch of the zEnterprise mainframe in New York City. Thanks to the reliability of the computer, he said, “we can set up accounts for bushmen in the Kalahari.”
Karl’s note: The following is a guest post by Jim Porell, Distinguished Engineer and System z Evangelist.
System zEnterprise is the new server on the block, but it gets there by taking the best of many servers and putting them together as a “System of Systems.” Its goal is to make a business smarter. Let’s look at how.
Moore’s law has meant that, year after year, each server platform gets faster and cheaper, and most servers, IBM and otherwise, have done just that. From a customer point of view, though, many businesses have built silos of operations: for example, transaction processing on a mainframe, a data warehouse on UNIX servers, and web portals on PC servers. Across a business, data is copied regularly, there are multiple operational domains, and there can be many, many servers, consuming valuable floor space and energy and tying up administrative personnel.
Big Iron never dies. Forty-six years after the first IBM mainframe models were introduced, our company is launching a new generation of the machines today in New York City. The zEnterprise series offers the kinds of performance you’d expect: The top-of-the-line machine is equipped with 96 powerful processors running at a blazing-fast 5.2GHz, together capable of executing more than 50 billion instructions per second. In an era when PC servers run tens of applications simultaneously in virtualization mode, this model can run more than 100,000. It’s like a computing cloud in a box.
A key element of the launch is IBM Unified Resource Manager, a software innovation built into the systems that integrates a mainframe with Unix and PC blade servers as if they’re a single machine, with all of the security and reliability of the mainframe. We believe that in the not-too-distant future, the modern data center will no longer be a vast array of different types of devices and chunks of software but, instead, will be best understood as a single computing system, encompassing processing, memory, storage, networking, and all of the software and services that go with it. Conceptually and operationally, it will be one large machine. Unified Resource Manager is an important step in that direction.
This isn’t just some fancy technology trick that we’re doing because we can. The world of business computing is in the midst of a profound shift, driven by a convergence of forces. Digital intelligence is being injected into the world’s physical systems through pervasive instrumentation and global interconnectivity. That’s generating an exponential increase in the volume, quality, and speed of data. At the same time, doing business is growing in complexity and the pace of business has quickened. Companies are under intense pressure to respond to the expectations of a new generation of young people raised on the Internet, the rapid emergence of new markets, and intensifying competition.
To deal with all of these developments, enterprises need to become smarter–gathering more and better information, making sense of it, and acting wisely and immediately on what they learn.
California fruit grower Sun World International isn’t among the giants of agribusiness, but it punches above its weight class in global markets thanks in part to its use of business intelligence software. In fact, the Bakersfield, California-based company is one of the most pervasive users of data analytics that we’re aware of–in everything from farm operations and finance to sales and marketing. It’s also got an executive dashboard for tracking key performance metrics. “The notion of pervasive performance management is held up as an ideal, but there are few companies that actually do it. This is one of them,” says Tony Levy, a product marketing director in IBM’s Cognos business unit. Sun World International’s main suppliers of business intelligence software are IBM and Applied Analytix.
The company’s heavy reliance on data began five years ago after it was purchased by a private equity firm that brought in new management and insisted on improved performance. “These days, we ask questions, understand the numbers, and, most importantly, do something,” says Gordon Robertson, vice-president of sales and marketing.
From its 12,000 acres of land in California, Sun World International sells table grapes, stone fruit, peppers, and watermelons worldwide. It also breeds its own varieties of plants and licenses its genetic intellectual property to other growers. Its brands include Superior Seedless grapes, Black Diamond plums, and Honeycot apricots. It employs about 7,000 people in the fields.
Editor’s Note: The following post by Stephen L. Sams, vice president of site and facilities services for IBM, underscores the need for CIOs to more effectively manage growth in their data centers. If data centers are allowed to grow organically, CIOs can find themselves adding unnecessary resources, increasing the power demands and carbon footprints of their data centers beyond the needs of their business workloads. This post helps CIOs understand the importance of building a modular and flexible data center for more energy efficiency now and in the future.
How do you build a data center to last 20 years when information technology is changing every 2 years?
In the new economic environment, uncertainty, volatility and complexity seem to be at an all-time high – and they are still rising. Business processes are becoming more interconnected and global. Standout CEOs are focused on how to manage in a more complex environment by creating value through new perspectives, deeper insights and more information. For CEOs and their organizations, avoiding complexity is not an option — the choice comes in how they respond to it.
CIOs can play an important role in the enterprise by developing a vision of innovation enabled by IT. How well you manage your data centers to raise the return on investment of IT infrastructure, to expand the business impact of data center operations and to make innovation real determines the level of your success. Since data centers are long-term and somewhat static investments – needing to last 20 years while the technology inside changes every 2 to 3 years – it becomes an imperative that you plan strategies to be able to react to dynamic changes.
IBM’s data center family features innovation around a modular approach that enables three key ways to design a smarter data center.
One of the notable messages to come from IBM’s quarterly earnings report today was the strength of demand for technology in developing markets. Revenues in our growth markets represented 20% of overall revenues and they’re growing faster–expanding by 14% compared to 2% for overall revenues.
While the lion’s share of the demand comes from large, fast-growing economies such as China, India, and Brazil, the entire developing world is starting to take advantage of the transformative potential of information technology. Africa, too.
Consider Namibia. Late last year, the First National Bank of Namibia bought and installed an IBM mainframe computer. That may sound like overkill until you realize why the bank went that route and what it plans to do with the big machine.
Namibia is a desert country in southern Africa, with a population of just 2 million. Previously, the bank, which is the largest in the country, had its computing done by its parent company in neighboring South Africa. But that caused problems: outages and latency. So the bank decided to buy its own machine to better serve its network of 50 branches and 200 ATMs. “You’re in a developing nation, so your infrastructure isn’t as sound as elsewhere, but you still want to provide superior service to your clients,” says Stephen Lloyd van Rhyn, the bank’s head of information technology. He says the mainframe has done the trick: “It’s super reliable, like the old diesel engines. It just runs and runs. And it’s energy-efficient, too.”
So far, van Rhyn says, the bank is only using about 10% of the capacity of the mainframe, but that’s going to change. He’s gradually moving additional computing applications to the machine, with the goal, potentially, of achieving a “bank in a box.” At the same time, the parent company, First Rand Ltd., is expanding in neighboring countries. It currently has operations in Botswana, Zambia, Lesotho, Swaziland, and Mozambique, and is considering using the mainframe in Namibia to support the expansion strategy.
What FNB is doing illustrates the potential for bringing advanced computing capabilities to places that have very little now. Some of southern Africa’s economies are perking up, and computing can help sustain and even accelerate that momentum. The goal: A smarter Africa.