8 radical ways to cut data center power costs

One or more of these wild-eyed approaches could save you a lot of money -- and not cost you much

Today's data center managers are struggling to juggle the business demands of a more competitive marketplace with budget limitations imposed by a soft economy. They seek ways to reduce opex (operating expenses), and one of the fastest growing -- and often biggest -- data center operation expenses is power, consumed largely by servers and coolers.

Alas, some of the most effective energy-saving techniques require considerable upfront investment, with paybacks measured in years. But some oft-overlooked techniques cost next to nothing -- they're bypassed because they seem impractical or too radical. The eight power savings approaches here have all been tried and tested in actual data center environments, with demonstrated effectiveness. Some you can put to work immediately with little investment; others may require capital expenditures but offer faster payback than traditional IT capex (capital expenses) ROI.

[ Unlearn the untrue and outdated data center practices in Logan G. Harbaugh's "10 power-saving myths debunked." | Use server virtualization to get highly reliable failover at a fraction of the usual cost. Find out how in InfoWorld's High Availability Virtualization Deep Dive PDF special report. ]

The holy grail of data center energy efficiency metrics is the PUE (power usage effectiveness) rating, in which lower numbers are better and 1.0 is the ideal. PUE compares total data center electrical consumption to the amount that actually reaches the IT equipment to do useful computing. A not-uncommon value of 2.0 means that for every two watts coming into the data center, only one watt reaches a server -- the loss is power turned into heat, which in turn requires yet more power to remove via traditional data center cooling systems.
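
As a quick illustration of the arithmetic, the sketch below computes PUE from two metered values; the kilowatt figures are hypothetical, not measurements from any particular facility.

    # Minimal sketch of the PUE arithmetic; both figures below are hypothetical.
    total_facility_kw = 1200.0  # everything the utility meter sees: IT load, cooling, lighting, losses
    it_equipment_kw = 600.0     # power actually delivered to servers, storage, and network gear

    pue = total_facility_kw / it_equipment_kw
    overhead_kw = total_facility_kw - it_equipment_kw

    print(f"PUE = {pue:.2f}")                       # 2.00: one watt of overhead per watt of IT load
    print(f"Non-IT overhead = {overhead_kw:.0f} kW")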

As with all simple metrics, you must take PUE for what it is: a measure of electrical efficiency. It doesn't account for other energy sources -- such as ambient outside air, geothermal heat sinks, or hydrogen fuel cells -- many of which can be exploited to lower total power costs. The techniques that follow may or may not lower your measured PUE, but you can evaluate their effectiveness more simply by checking your monthly utility bill. That's where it really matters anyhow.

You won't find solar, wind, or hydrogen power in the bag of tricks presented here. These alternative energy sources require considerable investment in advanced technologies, which delays cost savings too much for the current financial crisis. By contrast, none of the following eight techniques requires any technology more complex than fans, ducts, and tubing.

The eight methods are:

1. Crank up the heat
2. Power down servers that aren't in use
3. Use "free" outside-air cooling
4. Use data center heat to warm office spaces
5. Use SSDs for highly active read-only data sets
6. Use direct current in the data center
7. Bury heat in the earth
8. Move heat to the sea via pipes

Radical energy savings method 1: Crank up the heat

The simplest path to power savings is one you can implement this afternoon: Turn up the data center thermostat. Conventional wisdom calls for data center temperatures of 68 degrees Fahrenheit or below, the logic being that these temperatures extend equipment life and give you more time to react in the event of a cooling system failure.

Experience does show that server component failures, particularly for hard disks, increase at higher operating temperatures. But in recent years, IT economics crossed an important threshold: Server operating costs now generally exceed acquisition costs. That can make hardware preservation a lower priority than cutting operating costs.

At last year's GreenNet conference, Google energy czar Bill Weihl cited Google's experience with raising data center temperatures, stating that 80 degrees Fahrenheit can be safely used as a new setpoint, provided a simple prerequisite is met in your data center: separating hot- and cold-air flows as much as possible, using curtains or solid barriers if needed.

Although 80 degrees Fahrenheit is a "safe" new setpoint, Microsoft's experience shows you could go higher. Its Dublin, Ireland, data center operates in "chiller-less" mode, using free outside-air cooling, with server inlet temperatures as high as 95 degrees Fahrenheit. But note that there is a point of diminishing returns as you raise the temperature: Server fans must spin faster, and the fans themselves consume more power.

Radical energy savings method 2: Power down servers that aren't in use

Virtualization has revealed the energy-saving advantages of spinning down unused processors, disks, and memory. So why not power off entire servers? Is the increased "business agility" of keeping servers ever ready worth the cost of the excess power they consume? If you can find instances where servers can be powered down, you can achieve the lowest power usage of all -- zero -- at least for those servers. But you'll have to counter the objections of naysayers first.

For one, it's commonly believed that power cycling shortens a server's life expectancy, due to stress placed on non-field-swappable components such as motherboard capacitors. That turns out to be a myth: Servers are built from the same classes of components used in devices that routinely endure frequent power cycling, such as automobiles and medical equipment. No evidence points to any decrease in MTBF (mean time between failures) as a result of the kind of power cycling servers would undergo.

A second objection is that servers take too long to power up. However, you can often accelerate server startup by turning off unnecessary boot-time diagnostic checks, booting from already-operational snapshot images, and exploiting warm-start features available in some hardware.

A third complaint: Users won't wait if we have to power up a server to accommodate increased load, no matter how fast the things boot. However, most application architectures don't say no to new users so much as simply process requests more slowly, so users aren't aware that they're waiting for servers to spin up. Where applications do hit hard headcount limits, users have shown they're willing to hang in there as long as they're kept informed by a simple "we're starting up more servers to speed your request" message.
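
As for the mechanics of powering servers back up on demand, nothing exotic is required. One common approach -- assuming the servers' NICs and firmware support it -- is Wake-on-LAN. Here's a minimal sketch, with placeholder MAC and broadcast addresses:

    import socket

    def wake_on_lan(mac: str, broadcast: str = "192.168.1.255", port: int = 9) -> None:
        # A Wake-on-LAN "magic packet" is six 0xFF bytes followed by the target MAC repeated 16 times.
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + mac_bytes * 16
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.sendto(packet, (broadcast, port))

    wake_on_lan("00:1a:2b:3c:4d:5e")  # placeholder MAC of the server to power up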

Radical energy savings method 3: Use "free" outside-air cooling

Higher data center temperatures make it easier to exploit this power-saving technique: so-called free-air cooling, which uses lower outside air temperatures as a cool-air source, bypassing expensive chillers, as Microsoft does in Ireland. If you're trying to maintain 80 degrees Fahrenheit and the outside air is 70, you can get all the cooling you need by blowing that air into your data center.

Implementing this is a bit more laborious than method 1's simple thermostat adjustment: You must reroute ducts to bring in outside air and install rudimentary safety measures -- such as air filters, moisture traps, fire dampers, and temperature sensors -- to ensure the great outdoors doesn't damage sensitive electronic gear.

In a controlled experiment, Intel realized a 74 percent reduction in power consumption using free-air cooling. Two trailers packed with servers -- one cooled by traditional chillers, the other by a combination of chillers and outside air with large-particle filtering -- were run for 10 months. The free-air trailer was able to use air cooling exclusively 91 percent of the time. Intel also found a significant layer of dust inside the free-air-cooled servers, reinforcing the need for effective fine-particle filtration. Expect to swap filters frequently, so factor in the cost -- or invest in cleanable, reusable filters.

Despite significant dust and wide changes in humidity, Intel found no increase in failure rate for the free-air cooled trailer. Extrapolated to a data center consuming 10 megawatts, this translates to nearly $3 million in annual cooling cost savings, along with 76 million fewer gallons of water, which is itself an expensive commodity in some regions.
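
Before committing to ductwork, you can get a rough sense of how often outside air would suffice by counting the hours your local temperature stays safely below your target inlet temperature. A back-of-envelope sketch, using invented hourly readings:

    # Back-of-envelope sketch: what fraction of hours could free-air cooling carry the load?
    # The hourly readings below are invented for illustration; use real weather data for your site.
    hourly_temps_f = [55, 62, 68, 71, 74, 79, 83, 77, 69, 60, 58, 52]
    setpoint_f = 80.0  # target server inlet temperature (see method 1)
    margin_f = 5.0     # assumed headroom for duct losses and air mixing

    free_air_hours = sum(1 for t in hourly_temps_f if t <= setpoint_f - margin_f)
    print(f"Free-air cooling usable {free_air_hours / len(hourly_temps_f):.0%} of sampled hours")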

Radical energy savings method 4: Use data center heat to warm office spaces

You can double your energy savings by using data center BTUs to heat office spaces, which is the same thing as saying you'll use relatively cool office air to chill down the data center. In cold climes, you could conceivably get all the heat you need to keep people warm and manage any additional cooling requirements with pure outside air.

Unlike free-air cooling, this approach may let you retire your existing heating system for good; by definition, when it's warm outside you won't need a people-space furnace. And forget worries about chemical contamination from fumes emanating from server room electronics: Modern Restriction of Hazardous Substances (RoHS)-compliant servers have eliminated environmentally unfriendly materials -- such as cadmium, lead, mercury, and polybrominated flame retardants -- from their construction.

As with free-air cooling, the only tech you need to pull this off is good old HVAC know-how: fans, ducts, and thermostats. You'll likely find that your data center puts out more than enough therms to replace traditional heating systems. IBM's data center in Uitikon, Switzerland, heats the town pool for free, saving energy equal to that needed to heat 80 homes. TelecityGroup Paris even uses server waste heat to warm year-round greenhouses for climate change research.
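
If you want to gauge how far your waste heat would go, the conversion is straightforward; the IT load and per-home furnace output below are assumptions for illustration, not figures from the projects above.

    # Rough sketch: usable heat thrown off by a given IT load (all figures are assumptions).
    it_load_kw = 500.0                          # hypothetical continuous IT load
    btu_per_hour = it_load_kw * 3412.0          # 1 kW is roughly 3,412 BTU per hour
    therms_per_hour = btu_per_hour / 100_000.0  # 1 therm = 100,000 BTU

    typical_home_btu_per_hour = 85_000.0  # assumed furnace output of an average home while running
    homes_heated = btu_per_hour / typical_home_btu_per_hour

    print(f"{btu_per_hour:,.0f} BTU/hr ({therms_per_hour:.1f} therms/hr), "
          f"enough for roughly {homes_heated:.0f} homes while their furnaces would be running")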

Reconfiguring your furnace system may entail more than a weekend project, but the costs are likely low enough that you can reap savings in a year or less.

Radical energy savings method 5: Use SSDs for highly active read-only data sets

SSDs have been popular in netbooks, tablets, and laptops due to their speedy access times, low power consumption, and very low heat emissions. They're used in servers, too, but until recently their cost and reliability have been barriers to adoption. Fortunately, SSDs have dropped in price considerably in the last two years, making them candidates for quick energy savings in the data center -- provided you use them for the right application. When employed correctly, SSDs can knock a fair chunk off the price of powering and cooling disk arrays, with 50 percent lower electrical consumption and near-zero heat output.

One problem SSDs haven't licked is the limited number of write operations, currently around 5 million writes for the single-level-cell (SLC) devices appropriate for server storage. Lower-cost consumer-grade multilevel-cell (MLC) components have higher capacities but one-tenth of SLCs' endurance.

The good news about SSDs is that you can buy plug-compatible drives that readily replace your existing power-hungry, heat-spewing spinning disks. For a quick power reduction, move large, primarily read-only data sets, such as streaming video archives, onto SSDs. You won't run into SSD wear-out problems, and you'll gain an instant performance boost in addition to the reduced power and cooling costs.

Go for drives specifically designed for server, rather than desktop, use. Such drives typically have multichannel architectures to increase throughput. The most common interface is SATA 2.0, with 3Gbps transfer speeds. Higher-end SAS devices, such as the Hitachi/Intel Ultrastar SSD line, can reach 6Gbps, with capacities up to 400GB. Although SSDs have had some design flaws, these have primarily affected desktop and laptop drives and involved BIOS passwords and encryption -- factors that don't apply to server storage.

Do plan to spend some brain cycles monitoring usage on your SSDs, at least initially. Intel and other SSD makers provide analysis tools that track read and write cycles, as well as write failure events. SSDs automatically remap writes to even out wear across the device, a process called wear leveling, which can also detect and recover from some errors. When significant write failures begin occurring, it's time to replace the drive.
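
If your drives expose SMART data, a little scripting covers the basics of that monitoring. The sketch below shells out to smartctl (from the smartmontools package); the device path is a placeholder, and wear-related attribute names vary by vendor -- "Media_Wearout_Indicator", for instance, is the name Intel drives use.

    import subprocess

    def ssd_wear_report(device: str = "/dev/sda") -> None:
        # Dump SMART attributes and print any wear-related lines; attribute names vary by vendor.
        output = subprocess.run(["smartctl", "-A", device],
                                capture_output=True, text=True, check=False).stdout
        for line in output.splitlines():
            if "Wear" in line or "Media_Wearout" in line:
                print(line.strip())

    ssd_wear_report()  # placeholder device path; requires privileges sufficient to read SMART data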

Radical energy savings method 6: Use direct current in the data center

Yes, direct current is back. This seemingly fickle approach to power distribution enjoys periodic resurgences as electrical technologies ebb and flow. The lure is a simple one: Servers use direct current internally, so feeding DC power to them directly should reap savings by eliminating the AC-to-DC conversion performed by each server's internal power supply.

Direct current was popular in the early 2000s because the power supplies in servers of that era had AC-to-DC conversion efficiencies as low as 75 percent. But then power supply efficiencies improved, and data centers shifted to more efficient 208-volt AC distribution. By 2007, direct current had fallen out of favor; InfoWorld even counted it among the myths in our 2008 article "10 power-saving myths debunked." Then in 2009 direct current bounced back, owing to the introduction of high-voltage data center products.

In the earliest data centers, utility-supplied 16,000 VAC (volts of alternating current) electricity was first converted to 440 VAC for routing within the building, then to 220 VAC, and finally to the 110 VAC used by the era's servers. Each conversion wasted power by dint of being less than 100 percent efficient, with the loss cast off as heat (which then had to be removed by cooling, incurring yet more power expense). The switch to 208 VAC eliminated one conversion, and with in-server power supplies running at 95 percent efficiency, there was no longer much to gain.

But 2009 brought a new line of data center equipment that can convert 13,000 VAC utility power directly to 575 VDC (volts of direct current), which is then distributed to the racks, where a final step-down converter takes it to 48 VDC for consumption by the servers in the rack. Each conversion is about twice as efficient as older AC transformer technology and emits far less heat. Although vendors claim as much as a 50 percent savings when electrical and cooling reductions are combined, most experts say 25 percent is a more credible number.
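
The arithmetic behind those claims is just multiplied conversion efficiencies. The per-stage figures in the sketch below are illustrative assumptions, not vendor specifications, but they show why removing or improving a stage matters.

    # Illustrative sketch: cascaded conversion efficiencies (all per-stage figures are assumptions).
    def chain_efficiency(stages):
        eff = 1.0
        for stage in stages:
            eff *= stage
        return eff

    legacy_ac = chain_efficiency([0.97, 0.96, 0.90, 0.75])  # several AC step-downs plus an old 75% PSU
    modern_ac = chain_efficiency([0.97, 0.95])              # 208 VAC distribution plus a 95% PSU
    hv_dc = chain_efficiency([0.97, 0.96])                  # 13kV-to-575VDC rectifier plus 48VDC rack converter

    for name, eff in (("legacy AC", legacy_ac), ("modern 208 VAC", modern_ac), ("575/48 VDC", hv_dc)):
        print(f"{name}: {eff:.0%} of utility power reaches the server")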

This radical approach does require some expenditure on new technology, but the technologies involved are not complex and have been demonstrated to be reliable. One potential hidden cost is the heavier copper cabling required for 48 VDC distribution. As Joule's law dictates, lower voltages require heavier conductors to carry the same power as higher voltages, due to the higher amperage. Another cost factor is the larger relative voltage drop incurred over distance at 48 VDC (about 20 percent per 100 feet) compared to AC distribution. This is why the 48 VDC conversion is done in the rack rather than back at the utility power closet.
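
A quick calculation shows why that final 48 VDC run is kept short. The branch load and cable resistance below are assumptions chosen only to illustrate the effect.

    # Hedged sketch: conductor losses at 48 V versus a higher distribution voltage.
    # The load and round-trip cable resistance are assumptions for illustration.
    power_w = 4800.0       # hypothetical branch load
    resistance_ohm = 0.02  # assumed round-trip resistance of the copper run

    for volts in (48.0, 208.0):
        amps = power_w / volts
        drop_v = amps * resistance_ohm       # Ohm's law: voltage lost along the run
        loss_w = amps ** 2 * resistance_ohm  # Joule's law: power dissipated as heat in the cable
        print(f"{volts:.0f} V feed: {amps:.0f} A, {drop_v:.2f} V drop ({drop_v / volts:.1%}), {loss_w:.0f} W lost")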

Of course, converting to direct current requires servers that can accommodate 48 VDC power supplies. For some, converting to DC is a simple power supply swap. Chassis-based systems, such as blade servers, may be cheaper to convert because many servers share a single power supply. Google took the low-tech expedient of putting a 12-volt battery on each server in place of a centralized UPS (uninterruptible power supply), claiming 99 percent efficiency versus a traditional AC-powered UPS infrastructure.

If you're planning a server upgrade, you might consider larger systems that can be powered directly from 575 VDC, such as IBM's Power 750 -- the machine that, as Watson, recently demolished its human competitors on the "Jeopardy" game show. Brand-new construction has the advantage of starting with a clean sheet of paper, as Syracuse University did when building out a data center last year, powering IBM z mainframes and Power servers with 575 VDC.

Radical energy savings method 7: Bury heat in the earth

In warmer regions, free cooling may not be practical all year long. Iowa, for example, has moderate winters but blistering summers, with air temperatures in the 90- and 100-degree range, which is unsuitable for air-side economization.

But the ground a few feet down maintains a steady, relatively low temperature, and the subsurface earth is far less affected by the outdoor weather -- rain or heat waves -- that can overload traditional cooling equipment. By running pipes into the earth, you can circulate water carrying server-generated heat down to depths where the surrounding ground carries the heat away by conduction.

Again, the technology is not rocket science, but geothermal cooling does require a fair amount of pipe. A successful geothermal installation also requires careful advance analysis. Because a data center generates heat continuously, pumping that heat into a single earth sink could lead to local saturation and a loss of cooling. An analysis of ground capabilities near the data center will determine how much a given area can absorb, whether heat-transfer assistance from underground aquifers will improve heat dissipation, and what, if any, environmental impacts might ensue.
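
For a first-pass feel of the scale involved, designers often quote heat rejection per meter of borehole. Every figure in the sketch below is an assumption standing in for that site analysis, not a design value.

    # Very rough borehole-field sizing sketch; every figure is an assumption, not site data.
    heat_to_reject_kw = 250.0  # hypothetical continuous server heat load
    watts_per_meter = 50.0     # assumed heat rejection per meter of borehole (rule-of-thumb range ~30-70 W/m)
    borehole_depth_m = 100.0   # assumed depth per hole

    total_meters = heat_to_reject_kw * 1000.0 / watts_per_meter
    holes = total_meters / borehole_depth_m
    print(f"About {total_meters:,.0f} m of borehole, or roughly {holes:.0f} holes at {borehole_depth_m:.0f} m each")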

Speaking of Iowa, the ACT college testing nonprofit deployed a geothermal heat sink for its Iowa City data center. Another Midwestern company, Prairie Bunkers near Hastings, Neb., is pursuing geothermal cooling for its Data Center Park facility, converting several 5,000-square-foot ammo bunkers into self-contained data centers.

Radical energy savings method 8: Move heat to the sea via pipes

Unlike geothermal heat sinks, the ocean is effectively an infinite heat sink for data center purposes. The trick is being near one, but that is more likely than you might think: Any sufficiently large body of water, such as the Great Lakes between the United States and Canada, can serve as a coolant reservoir.

The ultimate seawater cooling scenario is a data center island, which could use the surrounding ocean to cool the data center through sea-to-freshwater heat exchangers. The idea is good enough that Google patented it back in 2007. Google's approach falls well outside the objectives of this article, however, since the first step is to acquire or construct an island.

But the idea isn't so farfetched if you're already located reasonably close to an ocean shore, large lake, or inland waterway. Nuclear plants have used sea and lake water cooling for decades. As reported in Computer Sweden (Google's English translation) last fall, Google took this approach for its Hamina, Finland, data center, a converted paper pulp mill. Using chilly Baltic Sea water as the sole means to cool its new mega data center, as well as to supply water for emergency fire protection, demonstrates a high degree of trust in the reliability of the approach. The pulp mill has an existing water inlet from the Baltic, with two-foot-diameter piping, reducing the project's implementation costs.

Freshwater lakes have been used successfully to cool data centers. Cornell University's Ithaca, N.Y., campus uses water from nearby 2.5-trillion-gallon Cayuga Lake to cool not just its data centers but the entire campus. The first-of-its-kind cooling facility, called Lake Source Cooling and built in 2000, pumps 35,000 gallons per hour, distributing water at 39 degrees Fahrenheit to campus buildings located 2.5 miles away.
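
The cooling capacity of such a loop follows directly from flow rate and temperature rise. The sketch below reuses the Cornell flow figure; the 20-degree supply/return temperature rise is an assumption, not a published number.

    # Hedged sketch: heat removal from water flow and temperature rise.
    GALLON_LITERS = 3.785
    flow_gph = 35_000.0           # flow rate cited above, gallons per hour
    delta_t_c = 20.0 * 5.0 / 9.0  # assumed supply/return temperature rise of 20 F, converted to Celsius

    mass_flow_kg_s = flow_gph * GALLON_LITERS / 3600.0  # water is roughly 1 kg per liter
    capacity_w = mass_flow_kg_s * 4186.0 * delta_t_c    # specific heat of water: ~4,186 J/(kg*C)
    print(f"Roughly {capacity_w / 1e6:.1f} MW of heat removal")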

Both salt- and freshwater cooling systems require one somewhat expensive component: a heat exchanger to isolate natural water from the water used to directly chill the data center. This isolation is necessary to protect both the environment and sensitive server gear, should a leak occur in the system. Beyond this one expensive component, however, sea (and lake) water cooling requires nothing more complex than ordinary water pipe.

How much money do you want to save?

The value of these techniques is that none are mutually exclusive: You can mix and match cost-saving measures to meet your short-term budget and long-term objectives. You can start with the simple expedient of raising data center temperatures, then assess the value of the other techniques in light of the savings you achieve with that first step.

This story, "8 radical ways to cut data center power costs," was originally published at InfoWorld.com.

