Attacking the Data Center Energy Crisis

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

For more than 15 years, Ken Brill has preached the gospel of optimizing data center availability. Now he’s evangelizing a more urgent message: data centers must reduce their environmental footprint or risk letting their spiraling power demands run away with our energy future.

Brill’s Uptime Institute serves Fortune 100-sized companies with multiple data centers that represent the largest consumers of data processing capacity in the world. A typical member has 50,000 square feet of computer room space and consumes “a small city’s worth” of utility power. For these companies, uptime has historically been the brass ring; their need for 24 x 7 availability has trumped all other considerations. That has created a crisis that only these very large companies can address, Brill says.

Basically, these businesses have become power hogs. They already consume more than 3% of all power generated in the US, and their escalating data demands are driving that figure higher. A research report prepared by the Uptime Institute and McKinsey & Co. found that about one third of all data centers are less than 50% utilized. Despite that fact, the installed base of data center servers is expected to grow 16% per year while energy consumption per server is growing at 9% per year.
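
A quick back-of-the-envelope calculation shows how fast those two growth rates compound. The sketch below uses the figures cited above; the flat-generation assumption and the five-year horizon are mine, purely for illustration:

    # Rough projection of the data center share of US power generation,
    # assuming the growth rates cited above hold and total generation stays flat.
    base_share = 0.03        # "more than 3%" of US power today
    server_growth = 0.16     # installed base: +16% per year
    energy_growth = 0.09     # consumption per server: +9% per year

    share = base_share
    for year in range(1, 6):
        # total energy = servers x energy per server, so the rates multiply
        share *= (1 + server_growth) * (1 + energy_growth)
        print(f"Year {year}: {share:.1%} of today's US generation")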

“During the campaign, John McCain talked about building 45 nuclear power plants by 2030,” Brill says. “At current growth rates, that capacity will be entirely consumed by the need for new data centers alone.”

Potential for Abuse

That might not be so bad if the power were being put to good use, but Uptime Institute and its partners believe that data centers are some of the country’s worst power abusers. In part, that’s because uptime is no longer the key factor in data center performance, even though data center managers treat it that way. Data centers used to serve the needs of mission-critical operations like credit card processing. Response times and availability were crucial, and companies would spend lavishly to ensure perfection.

Today, many data centers serve non-critical application needs like search or large social networks. Uptime isn’t a top priority for these tasks, and spending on redundancy and over-provisioning is a waste if the only impact on the user is a longer response time for a search result.

The move to server-based computing as a replacement for mainframes has actually increased data center inefficiency. Whereas mainframes historically ran at utilization rates of 50% or more, “Servers typically operate at less than 10% utilization,” Brill says, “and 20% to 30% of servers aren’t doing anything at all. We think servers are cheap, but the total cost of ownership is worse than mainframes.”

Wait a minute: not doing anything at all? According to Brill, large organizations have been lulled into believing that servers are so cheap that they have lost control of their use. New servers are provisioned indiscriminately for applications that later lose their value. These servers take up residence in a corner of the data center, where they may chug away for years, quietly consuming power without delivering any value to the organization. In Brill’s experience, companies that conduct audits routinely find scores or hundreds of servers that can simply be shut off without any impact whatsoever on the company’s operations.

And what is the implication of leaving those servers on? Brill compares the power load of a standard rack-mounted server to stacking hairdryers in the same space and turning them all on at once. Not only is the power consumption astronomical, but there is a corresponding need to cool the intense heat generated by this equipment. That can consume nearly as much power as the servers themselves.
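
To make the hairdryer comparison concrete, here is a rough calculation. The wattages below are typical figures I’ve assumed, not numbers from Brill:

    # Back-of-the-envelope: one full rack of 1U servers vs. hairdryers.
    # All wattages are assumed, typical figures -- not Brill's numbers.
    SERVER_WATTS = 400        # a loaded 1U server (assumed)
    HAIRDRYER_WATTS = 1500    # a common consumer hairdryer
    SERVERS_PER_RACK = 40     # a densely populated 42U rack
    COOLING_FACTOR = 0.9      # cooling draws "nearly as much" as the IT load

    it_load = SERVER_WATTS * SERVERS_PER_RACK
    total = it_load * (1 + COOLING_FACTOR)
    print(f"IT load: {it_load / 1000:.1f} kW, "
          f"about {it_load / HAIRDRYER_WATTS:.0f} hairdryers running at once")
    print(f"With cooling overhead: {total / 1000:.1f} kW")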

The strategies needed to combat all this waste aren’t complicated or expensive, he says. Begin by auditing your IT operations and turning off servers you don’t need. Then consolidate existing servers to achieve utilization rates in the 50% range. He cites the example of one European company that merged more than 3,000 servers into 150, achieving a 90% savings in the process. “People think this is an expensive process, but it actually saves money,” he says.
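
The arithmetic behind that European example is worth spelling out. In the sketch below, the per-server wattage is an assumption; the server counts and the 90% figure are Brill’s:

    # The European consolidation example: 3,000 servers merged into 150.
    WATTS_PER_SERVER = 500            # assumed average draw before consolidation
    before, after = 3000, 150         # server counts cited by Brill
    reported_savings = 0.90           # power savings cited by Brill

    before_kw = before * WATTS_PER_SERVER / 1000
    after_kw = before_kw * (1 - reported_savings)
    print(f"Count reduction: {1 - after / before:.0%}")                   # 95%
    print(f"Power: {before_kw:,.0f} kW -> {after_kw:,.0f} kW")
    print(f"Implied draw per surviving server: {after_kw * 1000 / after:,.0f} W")
    # 95% fewer boxes but 90% power savings: the survivors draw about twice
    # as much each, which is what consolidating onto bigger hosts implies.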

In the longer term, making data center efficiency a corporate goal is crucial: CIO tenures are short, and without a mandate from above, their incentives skew toward short-term objectives. The commitment to environmental preservation and power efficiency needs to come from the top, he says.

Uptime Institute has white papers, podcasts and other resources that provide step-by-step guides to assessing and improving data center efficiency. This is a goal that every IT professional should be able to support.

With Cloud Computing, Look Beyond the Cost Savings

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

Back in the early days of data center outsourcing, some pioneer adopters got a rude surprise.  These companies had outsourced all or large parts of their data centers to specialty providers who bought their equipment, hired their staff and offered attractive contract terms that shaved millions of dollars in expenses in the first year.

The surprise was that the contract terms weren’t so attractive once the outsourcer became embedded in the client’s expense line. Customers found that nearly everything carried hefty escalator fees, from unbudgeted capacity increases to software patches to staff overtime. But there was little customers could do. They were locked into the contractor, and the cost of unlocking themselves was prohibitive.

This story came to mind recently during a chat with Bob McFarlane, a principal at facilities design firm Shen Milsom & Wilke. McFarlane is an expert in data center design, and his no-nonsense approach to customer advocacy has made him a hit with audiences around the country.

McFarlane thinks the current hype around hosted or “cloud” computing is getting out of touch with reality.  Cloud computing, which I’ve written about before, involves outsourcing data processing needs to a remote service, which theoretically can provide world-class security, availability and scalability.  Cloud computing is very popular with startups these days, and it’s beginning to creep onto the agenda of even very large firms as they reconsider their data processing architectures.

The economics of this approach are compelling.  For some small companies in particular, it may never make financial sense to build a captive data center because the costs of outsourcing the whole thing are so low.  McFarlane, however, cautions that value has many dimensions.

What is the value, for example, of being able to triple your processing capacity because of a holiday promotion? Not all hosting services offer that kind of flexibility in their contracts, and those that do may charge handsomely for it.

What is the value of knowing that your data center has adequate power provisioning, environmentals and backups in case of a disaster? Last year, a power failure in San Francisco knocked several prominent websites offline for several hours when backup generators failed to kick in. Hosting services in earthquake or flood-prone regions, for example, need extra layers of protection.

McFarlane’s advice: don’t buy a hosting service based on undocumented claims or marketing materials. You can walk into your own data center and kick a power cord out of the wall to see what happens; chances are you can’t do that in a remote facility. There are no government regulations for data center quality, so you pretty much have to rely on hosting providers to tell the truth.

Most of them do, of course, but even the truth can be subject to interpretation. The Uptime Institute has created a tiered system for classifying infrastructure performance. However, McFarlane recalls one hosting provider that advertised top-level Uptime Institute compliance but didn’t employ redundant power sources, which is a basic requirement for that designation.

This doesn’t mean you should ignore the appealing benefits of cloud computing, but you should look beyond the simple per-transaction cost savings. Scrutinize contracts for escalator clauses and availability guarantees.  Penalties should give you appropriate compensation.  While you won’t convince a hosting service to refund you the value of lost business, you should look for something more than a simple credit toward your monthly fee.

If you can, plan a visit to a prospective hosting provider and tour its facilities.  Reputable organizations should have no problem letting you inside the data centers and allowing you to bring along an expert to verify their claims. They should also be more than willing to provide you with contact information for reference customers. Familiarity, in this case, can breed peace of mind.

Tips For a Greener – And Cheaper – Data Center

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

The EPA estimates that data centers eat up about 1.5% of all electricity in the United States and that nearly a quarter of that power is wasted.

As I noted last week, energy waste is one of the dirty little secrets of corporate data centers. Add to that the money lost to PCs sitting idle overnight and the waste inherent in abandoned and underutilized servers, and you have a lot of money just sitting out there waiting to be found.

Energy-saving is mostly a matter of common sense. The simplest approach may be to start, literally, at the ground floor. If you peer under the raised flooring in your data center, you’ll probably find pockets of cables clustered together, and these could be inhibiting air flow. By re-cabling and deploying vented floor tiles in strategic locations, you can cut energy waste with almost zero capital expense. Many consultants now specialize in this area, and bringing in an expert can save time and money in the long run. This article tells the story of one company that saved $1 million annually in power costs through this simple measure.

The next step is to analyze your server use to determine what can be shut down and consolidated. One speaker at last fall’s Data Center World conference proposed a radical idea: if you don’t know who’s using a server, shut it down. Mark Monroe, Sun Microsystems’ director of sustainable computing, said that his group tried this approach and discovered that nearly 12% of its servers weren’t being used for anything. The application owners had moved on, and no one had bothered to shut the servers down.
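
One low-risk way to act on that advice is to flag candidates from monitoring data before pulling any plugs. Here’s a minimal sketch; the CSV file, its column names and the 5% threshold are all hypothetical stand-ins for whatever your monitoring system exports:

    # Flag servers that look abandoned: sustained near-zero CPU load.
    # "utilization.csv" and its columns are hypothetical stand-ins for
    # an export from your monitoring system.
    import csv

    IDLE_THRESHOLD = 5.0   # percent average CPU; an illustrative cutoff

    with open("utilization.csv") as f:
        for row in csv.DictReader(f):
            if float(row["avg_cpu_pct"]) < IDLE_THRESHOLD:
                # Don't power it off yet -- find the owner first, or take
                # the Data Center World approach and see who complains.
                print(f"Shutdown candidate: {row['hostname']} "
                      f"({row['avg_cpu_pct']}% avg CPU)")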

Consolidate the servers you’re using into one location and direct your cooling resources to that hotspot. Use virtualization to pack multiple virtual servers onto fewer physical machines. By some estimates, about 70% of servers in the data center are supporting only one application, and utilization rates of less than 15% on single servers are not uncommon. These are ideal candidates for virtualization.
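
The packing arithmetic is simple. The sketch below uses the 15% utilization figure cited above; the 70% target load on the physical host is my own conservative assumption, and it considers CPU only (memory and I/O often bind first):

    # How many single-application servers can one virtualization host absorb?
    # Uses the ~15% utilization figure above; the 70% target load on the
    # host is an assumed planning number, and this considers CPU only.
    guest_load = 0.15
    host_target = 0.70

    guests_per_host = int(host_target / guest_load)       # 4
    print(f"About {guests_per_host} such servers per physical host")
    print(f"Physical footprint reduction: {1 - 1 / guests_per_host:.0%}")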

Air conditioning is responsible for the greatest energy waste. The problem is that most data center managers don’t know where all their hotspots are, so they take a brute-force approach and cool the entire data center to a uniform level. The reality is that the hottest servers probably occupy only a minority of the floor space.

While high-density servers can consume less power overall than the individual machines they replace, there’s no reason to structure your cooling plan around the needs of maybe 10% of your hardware. Several vendors now sell server racks that are optimized for cooling.  Also, the water cooling technique that was common in the mainframe days two decades ago is staging a revival as server consolidation comes back into vogue.

Crank up the heat

Once you’ve isolated your most power-hungry servers, turn up the thermostat on the rest. Most servers can operate perfectly well at temperatures as high as 100°F (be sure to check with your supplier before trying that, though), and each 1°F increase in temperature can save about 4% in energy costs, according to Sun’s Monroe.
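
Compounded over a realistic setpoint change, that 4% figure adds up quickly. The sketch below treats the savings as compounding per degree; the 68°F starting setpoint is an assumption:

    # Cooling savings from raising the setpoint, per Monroe's ~4% per 1°F.
    # Treated as compounding per degree; the 68°F baseline is assumed.
    SAVINGS_PER_DEGREE = 0.04
    BASELINE_F = 68

    for setpoint in (72, 75, 78):
        remaining = (1 - SAVINGS_PER_DEGREE) ** (setpoint - BASELINE_F)
        print(f"{BASELINE_F}°F -> {setpoint}°F: ~{1 - remaining:.0%} saved")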

You should also become familiar with the EPA’s Energy Star initiative. This program sets standards for efficient energy use and publishes lists of products that meet them.  Needless to say, computers are a major Energy Star focus. Did you know, for example, the EPA estimates that enabling basic power management features that come with the Windows operating system can save up to $75 per computer per year? While there are legitimate reasons to leave PCs on all night at times, simple open source network tools can enable systems managers to shut down unused computers and still have the flexibility to power them on when needed. The Energy Star website has a list of software tools for remote power management as well as a power management savings calculator.
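
As one example of how simple those open source tools can be, the standard wake-on-LAN “magic packet” that powers machines back up after an overnight shutdown is just six 0xFF bytes followed by the target machine’s MAC address repeated 16 times, broadcast over UDP. A minimal sender (the MAC address below is a placeholder):

    # Minimal wake-on-LAN sender: broadcasts the standard magic packet
    # (6 x 0xFF, then the target MAC repeated 16 times) over UDP port 9.
    import socket

    def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(packet, (broadcast, port))

    wake("00:11:22:33:44:55")   # placeholder MAC address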

A somewhat more radical option is to outsource all or part of the data center. While there are many factors involved in this decision, the potential energy savings of such a move shouldn’t be underestimated. In the case of total data center outsourcing, contractors should be able to provide you with power savings estimates to factor into your calculations. Amazon’s S3 storage service, which sells cheap off-site data storage, is one of many specialized offerings now emerging. Part of its appeal is that users don’t need to pay for — and cool — on-site storage area networks.

Most technology vendors now have green initiatives, and you should become familiar with what your key vendors are doing. For example, IBM has made its own IT operations a showcase of energy efficiency. In the course of consolidating 155 data centers worldwide down to just seven, it’s cut operational costs by $1.5 billion. This podcast tells more.

What are you doing to save energy in your data center? Write your suggestions in the comments area below.

Computer Industry Finally Going Green

From Innovations, a website published by Ziff-Davis Enterprise from mid-2006 to mid-2009. Reprinted by permission.

The graphic at right may look kind of cool, but it’s anything but. It’s actually a simulation of the heat distribution in a typical data center, prepared by Innovative Research, a computational fluid dynamics company. It demonstrates graphically what all data center managers already know: the data center is nearly impossible to keep cool.

Unfortunately, this fact is costing us a fortune.  As the price of oil breaches $100 a barrel, new attention is being focused on the possibilities of wringing big savings out of data centers by attacking their notoriously lousy energy efficiency.  Some stats:

  • The amount of electricity consumed by US data centers doubled between 2000 and 2006 and is expected to double again by 2011, according to the U.S. Environmental Protection Agency (EPA).
  • A typical 50,000-square-foot data center consumes about 57 barrels of oil per day (a worked conversion follows this list).
  • Data centers consume 1.5% of all electricity in the U.S., the EPA says.
  • About 40% of the power used by data centers goes to cooling, according to several estimates. About 60% of that expense is wasted, however, because of what the graphic above illustrates: data center heat distribution is extremely erratic, and spot cooling is complicated. Instead, companies use brute force and over-cool most of their equipment just to be sure the hottest machines don’t melt.
  • Over half the power that companies use to run their desktop computers is wasted because the machines aren’t shut off overnight or don’t power down when not in use, according to ClimateSaversComputing.org.  Most companies could save between $10 and $50 per PC per year by using basic power management software, according to Greener Computing. That adds up.
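
To put that 57-barrels-a-day figure into electrical terms, here’s the conversion, using the standard approximation of about 1,700 kWh per barrel of oil equivalent (and glossing over the thermal-versus-electrical distinction, as the original comparison presumably does):

    # Convert 57 barrels of oil per day into electrical terms.
    # ~1,700 kWh per barrel of oil equivalent is a standard approximation.
    BARRELS_PER_DAY = 57
    KWH_PER_BARREL = 1700

    kwh_per_day = BARRELS_PER_DAY * KWH_PER_BARREL          # ~97,000 kWh
    avg_mw = kwh_per_day / 24 / 1000
    print(f"{kwh_per_day:,} kWh per day, a continuous draw of ~{avg_mw:.0f} MW")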

These numbers are deplorable, but research by Network World identified an interesting explanation. Its survey found that 68% of IT manager respondents weren’t responsible for their energy bills; in most cases, those costs were paid by the facilities department. If IT never even sees the electric bill, it has no incentive to reduce it.

There is good news. Data centers are getting unprecedented attention right now as sources of significant cost savings, even if it’s only because there’s so much room for improvement. A recent PricewaterhouseCoopers study found that 60% of 150 senior executive respondents rated energy costs as a top priority, which means their IT managers will be getting an e-mail. IBM has made green data centers a key part of its marketing strategy. Dell recently launched an international competition to design technology products with a greener focus. Then there’s ClimateSaversComputing.org, an initiative sponsored by Google and Intel in which technology providers agree to hit certain energy consumption targets.

Members of the Technology CEO Council were in Washington just a few weeks ago to pitch the idea that investments in IT can save energy. While their agenda was self-serving, there’s no question that the industry as a whole is turning its attention to fixing this mess.

And it’s such an obvious mess to fix. Whether your motivations are the rapid payback, the positive environmental impact or the simple satisfaction of knowing that you’re not flushing money down the drain, why wouldn’t you want to make your IT operation more power-efficient? Next week, we’ll look at a few ideas for just how to do it.