Why Cloud is as ‘Green’ As It Gets

I stumbled across a document from Greenpeace citing cloud computing as a driver of additional power draw and the need for more renewable energy (http://www.greenpeace.org/international/en/publications/reports/make-it-green-cloud-computing/).  It's one of a series I've been noticing from the organization vilifying IT for its effect on the environment and chastising companies for building new data centers.  These articles all strike a chord with me because they show a complete lack of understanding of what cloud is, does, and will do on the whole, especially where it concerns energy consumption and 'green' computing.

Greenpeace seems to be looking at cloud as additional hardware and data centers being built to serve more and more data.  While cloud is driving new equipment, new data centers and larger computing infrastructures, it is doing so to consolidate computing overall.  Speaking of public cloud specifically, there is nothing greener than moving to a fully cloud infrastructure.  It's not about a company adding new services; it's about moving those services from underutilized internal systems onto highly optimized, highly utilized shared public infrastructure.

Another point they seem to be missing is the speed at which technology moves.  A state-of-the-art data center built 5-6 years ago would be lucky to reach a 1.5:1 Power Usage Effectiveness (PUE), whereas today's state-of-the-art data centers can get to 1.2:1 or below.  This means that a new data center can waste at least 0.3 kW less for every kW delivered to compute than one built 5-6 years ago.  Whether that power is renewable or not is irrelevant; it's a good thing.

The most efficient privately owned data centers moving forward will be ones built as private-cloud infrastructures that can utilize resources on demand, scale up/scale down instantly, and automatically shift workloads during non-peak times to power off unneeded equipment.  Even the best of these won't come close to the potential efficiency of public cloud offerings, which can leverage the same advantages and gain compounding benefits by spreading them across hundreds of global customers, maintaining high utilization rates around the clock, all year long.

Greenpeace lashing out at cloud and focusing its push on renewable energy is naive and short-sighted.  Several other factors go into thinking green with data centers.  Power and cooling are definitely key, but what about utilization?  Turning a server off during off-peak times is great for saving power, but that still means the components of the server had to be mined, shipped, assembled, packaged, and delivered to me in order to sit powered off a third of the day when I don't need the cycles.  That hardware will still be refreshed the same way, at which point some of the components may be recycled and the rest will become non-biodegradable and sometimes harmful waste.

Large data centers housing public clouds promise reduced overall power and cooling with maximum utilization.  You have to look at the whole picture to really go green.

Greenpeace: While you're out there casting stones at big data centers, how about you publish some of your own numbers?  Let's see the power, cooling, and utilization numbers for your computing/data centers: actual numbers, not what you offset by sending a check to Al Gore's bank account.  While you're at it, throw in the costs and damage created by your print advertising (paper, ink, power, etc.).  Give us a chance to see how green you are.


Technical Drivers for Cloud Computing

In a previous post I described the business drivers for cloud computing infrastructures (http://www.definethecloud.net/?p=27): basically, the idea of transforming the data center from a cost center into a profit center.  In this post I'll look at the underlying technical challenges that cloud looks to address in order to reduce data center cost and increase data center flexibility.

There are several infrastructure challenges faced by most data centers globally: Power, Cooling, Space and Cabling.  In addition to these challenges, data centers are constantly driven to adapt more rapidly and do more with less.  Let's take a look at the details of each.

Power

Power is a major data center consideration.  As data centers have grown and hardware has increased in capacity, power requirements have scaled dramatically.  This large power usage raises concerns about both cost and environmental impact.  Many power companies provide incentives for power reduction due to the limits on the power they can produce.  Additionally, many governments provide either incentives for power reduction or mandates for reduced usage, typically in the form of 'green initiatives.'

Power issues within the data center come in two major forms: total power usage, and usage per square meter/foot.  Any given data center can experience either or both of these issues.  Solving one without addressing the other may lead to new problems.

Power problems within the data center as a whole come from a variety of issues, such as equipment utilization and how effectively purchased power is used.  A common metric for the latter is Power Usage Effectiveness (PUE).  PUE is a measure of how much of the power drawn from the utility company is actually available to the computing infrastructure.  It is usually expressed as a ratio X:Y, where X is total power draw and Y is the power that reaches computing equipment such as switches, servers and storage.  The rest is lost to things like power distribution, battery backup and cooling.  Typical PUE numbers for data centers average around 2.5:1, meaning 1.5 kW is lost for every 1 kW delivered to the compute infrastructure.  Moving to state-of-the-art designs has brought a few data centers to 1.2:1 or lower.
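The arithmetic behind PUE is simple enough to sketch; the numbers below are illustrative examples matching the ratios above, not measurements from any real facility:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total utility draw divided by the
    power that actually reaches the computing equipment."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 2500 kW that delivers only 1000 kW to servers,
# switches and storage has a PUE of 2.5:1; the other 1500 kW is lost
# to power distribution, battery backup and cooling.
legacy = pue(2500, 1000)            # 2.5
state_of_the_art = pue(1200, 1000)  # 1.2
overhead_kw = 2500 - 1000           # 1500 kW of pure overhead
```

The same delivered compute load costs the state-of-the-art facility less than half the overhead power of the legacy one, which is the whole argument for newer, denser, shared facilities.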

Power per square meter/foot is another major concern and increases in importance as compute density increases.  More powerful servers, switches, and storage require more power to run.  Many data centers were not designed to support modern high-density hardware such as blades and therefore cannot support full-density implementations of this type of equipment.  It's not uncommon to find data centers with either near-empty racks housing a single blade chassis, or extra empty floor space left in order to support sparsely placed, fully populated racks.  The same can be said for cooling.
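The near-empty-rack problem falls straight out of the arithmetic; a quick sketch, where the per-rack power budgets and chassis draws are hypothetical round numbers, not vendor figures:

```python
def chassis_per_rack(rack_power_budget_kw: float,
                     chassis_draw_kw: float) -> int:
    """How many blade chassis a rack's power feed can actually support,
    regardless of how much physical rack space remains."""
    return int(rack_power_budget_kw // chassis_draw_kw)

# An older facility provisioned at ~5 kW per rack cannot fully populate
# racks with blade chassis drawing ~4.5 kW each:
older_facility = chassis_per_rack(5, 4.5)    # 1: the near-empty rack
modern_facility = chassis_per_rack(20, 4.5)  # 4: the same rack, properly fed
```

Floor space is wasted not because the racks are full, but because the power (and cooling) per square foot runs out first.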

Cooling

Data center cooling issues are closely tied to power.  Every watt of power used in the data center must also be cooled, and the coolers themselves in turn draw more power.  Cooling follows the same two general areas of consideration: cooling as a whole and cooling per square meter/foot.

One of the most common traditional cooling methods uses forced air delivered under raised floors.  This air is pushed up through the raised floor into 'cold aisles,' with the intake side of equipment facing in.  The equipment draws the air through, cooling internal components, and exhausts into 'hot aisles,' which are then vented back to the cooling system.  As data center capacity has grown and equipment density has increased, traditional cooling methods have been pushed to or past their limits.

Many solutions exist to increase cooling capacity and/or reduce cooling cost.  Specialized rack and aisle enclosures prevent hot/cold air mixing, spot fans alleviate trouble points, and ambient outside air can be used for cooling in some geographic locations.  Liquid cooling is another promising method of increasing cooling capacity and/or reducing costs.  Many liquids have a higher capacity for storing heat than air, allowing them to pull heat away from equipment more efficiently.  Liquid cooling systems for high-end devices have existed for years, but more and more solutions are being targeted at a broader market.  Solutions such as horizontal liquid-immersion racks allow off-the-shelf traditional servers to be fully immersed in mineral-oil-based coolants that have a high capacity for transferring heat and are less electrically conductive than dry wood.

Beyond investing in liquid cooling or moving the data center to northern Washington, there are other tools that can reduce data center cooling requirements.  One effective method is running equipment at higher temperatures, trading reduced cooling cost for an acceptable reduction in component mean-time-to-failure.  The most effective solution for reducing cooling is reducing infrastructure.  The 'greenest' equipment is the equipment you never bring into the data center; less power drawn equates directly to less cooling required.

Space

Space is a very interesting issue because it's all about who you are and, more importantly, where you are.  For instance, many companies started their data centers in locations like London, Tokyo and New York because that's where they were based.  Those data centers pay an extreme premium for the space they occupy.  Using New York as an example, many of those companies could save significantly every month by moving the data center across the Hudson, with little to no loss in performance.

That being said, many data centers require high-dollar space because of location.  'Market data,' for example, is all about latency (the time to receive or transmit data); every microsecond counts.  These data centers must be in financial hubs such as London and New York.  Other data centers may pay less per square meter/foot but could still reduce costs by reducing space.  In either event, reducing space reduces overhead and cost.

Cabling

Cabling is a pain point well understood by administrators but often forgotten by management.  Cabling nightmares have become an accepted norm of rapid change in data center environments.  The reason cabling has such potential for neglect is that it has been treated as an unmanageable or poorly understood problem.  Engineers tend to forget that a 'rat's nest' of cables behind the servers/switches or under the floor tiles hinders cooling efficiency.  To understand this, think of the back of the last real-world server rack you saw and the cables connecting those servers.  Take that thought one step further and think about the cables under the floor blocking what may be primary cold-air flow.

When thinking about cabling it's important to remember the key points: each cable has a purchase cost, each cable has a power cost, and each cable has a cooling cost.  Even without complex metrics to quantify these on a total basis, it's easy to see that reducing cables reduces cost.
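Those three costs can be folded into one back-of-the-envelope model.  Everything here is an assumption for illustration: the per-cable price, the per-port wattage, the utility rate, and the trick of charging cooling via PUE (every watt at the port costs PUE watts at the meter):

```python
def cable_cost(count: int, purchase_each: float, watts_each: float,
               pue: float = 2.5, kwh_rate: float = 0.10,
               years: int = 3) -> float:
    """Rough lifetime cost of a cable plant: purchase price plus port
    power, with cooling overhead folded in via the facility PUE."""
    hours = years * 365 * 24
    energy_kwh = count * watts_each / 1000 * hours * pue
    return count * purchase_each + energy_kwh * kwh_rate

# 1000 cables at a hypothetical $15 each, ~4 W per active port,
# over a 3-year refresh cycle in a PUE 2.5 facility:
total = cable_cost(1000, 15.0, 4.0)  # $41,280
```

Halve the cable count through consolidation and the savings show up in all three columns at once, which is why converged fabrics are pitched as a green play and not just a tidiness play.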

Taking all four of these factors into account and producing a solution that provides benefits for each is the goal of cloud computing.  Solve one problem in isolation and you will most likely worsen another.  Cloud computing is a tool to reduce infrastructure and cabling for everything from a Small-to-Medium Business (SMB) up to a global enterprise.  At the same time, cloud infrastructures support faster adoption times for business applications.  Say it how you will, but 'cloud' has the potential to reduce cost while improving 'time-to-market,' 'business agility,' 'data-center flexibility' or any other term you'd like to apply.  Cloud is simply the concept of rethinking the way we do IT today in order to meet the challenges of the way we do business today.  If right now you're asking 'why aren't we/they all doing it?' then stay tuned for my next post on the challenges of adopting cloud architectures.
