Technical Drivers for Cloud Computing

In a previous post I described the business drivers for Cloud Computing infrastructures (http://www.definethecloud.net/?p=27): basically, the idea of transforming the data center from a cost center into a profit center.  In this post I'll look at the underlying technical challenges that cloud aims to address in order to reduce data center cost and increase data center flexibility.

There are several infrastructure challenges faced by most data centers globally: power, cooling, space, and cabling.  In addition to these challenges, data centers are constantly driven to adapt more rapidly and do more with less.  Let's take a look at the details of each.

Power:

Power is a major data center consideration.  As data centers have grown and hardware has increased in capacity, power requirements have scaled dramatically.  This large power usage raises concerns about both cost and environmental impact.  Many power companies provide incentives for power reduction due to the limits on the power they can produce.  Additionally, many governments provide either incentives for power reduction or mandates for reduced usage, typically in the form of 'green initiatives.'

Power issues within the data center come in two major forms: total power usage, and usage per square meter/foot.  Any given data center can experience either or both of these issues.  Solving one without addressing the other may lead to new problems.

Power problems within the data center as a whole stem from a variety of issues, such as equipment utilization and how effectively purchased power is used.  A common metric for the latter is Power Usage Effectiveness (PUE).  PUE is a measure of how much of the power drawn from the utility company is actually available to the computing infrastructure.  It is usually expressed as a ratio X:Y, where X is total power draw and Y is the power that reaches computing equipment such as switches, servers, and storage; the rest is lost to things like power distribution, battery backup, and cooling.  PUE numbers for data centers typically average around 2.5:1, meaning 1.5 kW is lost for every 1 kW delivered to the compute infrastructure.  Moving to state-of-the-art designs has brought a few data centers to 1.2:1 or lower.
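To make the ratio concrete, here's a minimal sketch of the calculation (the 2.5:1 and 1.2:1 figures are the ones cited above; the function name is my own):

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power drawn from the
    utility divided by the power that reaches computing equipment."""
    return total_facility_kw / it_equipment_kw

# A facility drawing 2500 kW from the utility while delivering 1000 kW
# to servers, switches, and storage (the 2.5:1 average cited above):
print(pue(2500, 1000))  # 2.5 -> 1.5 kW lost for every 1 kW of compute

# A state-of-the-art facility delivering 1000 kW on a 1200 kW draw:
print(pue(1200, 1000))  # 1.2
```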

Power per square meter/foot is another major concern, and it increases in importance as compute density increases.  More powerful servers, switches, and storage require more power to run.  Many data centers were not designed to support modern high-density hardware such as blades and therefore cannot support full-density implementations of this type of equipment.  It's not uncommon to find data centers with near-empty racks housing a single blade chassis, or with increased empty floor space to support sparsely placed, fully populated racks.  The same can be said for cooling.

Cooling:

Data center cooling issues are closely tied to the issues with power.  Every watt of power used in the data center must also be cooled, and the coolers themselves in turn draw more power.  Cooling also follows the same two general areas of consideration: cooling as a whole and cooling per square meter/foot.
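That power-to-heat relationship can be sketched with the standard conversion of 1 kW of electrical load to roughly 3412 BTU/hr of heat (the 4 kW switch below is a hypothetical example, not one from the post):

```python
BTU_PER_HOUR_PER_KW = 3412.14  # 1 kW of electrical load dissipates ~3412 BTU/hr

def heat_btu_per_hour(it_load_kw):
    """Heat output in BTU/hr for a given electrical load in kW --
    under the rule of thumb that every watt consumed becomes heat
    that must be removed."""
    return it_load_kw * BTU_PER_HOUR_PER_KW

# A hypothetical 4 kW core switch needs roughly 4 kW of cooling,
# i.e. about 13,649 BTU/hr of heat removal:
print(round(heat_btu_per_hour(4), 2))
```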

One of the most common traditional data center cooling methods uses forced air provided under raised floors.  This air is pushed up through the raised floor in 'cold aisles', with the intake side of equipment facing in.  The equipment draws the air through, cooling internal components, and exhausts into 'hot aisles', which are then vented back into the system.  As data center capacity has grown and equipment density has increased, traditional cooling methods have been pushed to or past their limits.

Many solutions exist to increase cooling capacity and/or reduce cooling cost.  Specialized rack and aisle enclosures prevent hot/cold air mixing, hot-spot fans alleviate trouble points, and ambient outside air can be used for cooling in some geographic locations.  Liquid cooling is another promising method of increasing cooling capacity and/or reducing costs.  Many liquids have a higher capacity for storing heat than air, allowing them to pull heat away from equipment more efficiently.  Liquid cooling systems for high-end devices have existed for years, but more and more solutions are being targeted at a broader market.  Solutions such as horizontal liquid racks allow off-the-shelf traditional servers to be fully immersed in mineral-oil-based solutions that have a high capacity for transferring heat and are less electrically conductive than dry wood.

Beyond investing in liquid cooling solutions or moving the data center to Northern Washington, there are tools that can be used to reduce data center cooling requirements.  One effective method is running equipment at higher temperatures to reduce cooling cost, accepting a modest reduction in component mean-time-to-failure.  The most effective solution for reducing cooling, however, is reducing infrastructure.  The 'greenest' equipment is the equipment you never bring into the data center: less power drawn equates directly to less cooling required.

Space:

Space is a very interesting issue because it's all about who you are and, more importantly, where you are.  For instance, many companies started their data centers in locations like London, Tokyo, and New York because that's where they were based.  Those data centers pay an extreme premium for the space they occupy.  Using New York as an example, many of those companies could significantly cut monthly costs by moving the data center across the Hudson, with little to no loss in performance.

That being said, many data centers require high-dollar space because of location.  As an example, 'market data' is all about latency (the time to receive or transmit data): every microsecond counts.  These data centers must be in financial hubs such as London and New York.  Other data centers may pay less per square meter/foot but could still reduce costs by reducing space.  In either event, reducing space reduces overhead and cost.

Cabling:

Cabling is often a pain point understood by administrators but forgotten by management.  Cabling nightmares have become an accepted norm of rapid change in the data center environment.  The reason cabling has such potential for neglect is that it has been treated as an unmanageable or poorly understood problem.  Engineers tend to forget that a 'rat's nest' of cables behind the servers/switches or under the floor tiles hinders cooling efficiency.  To understand this, think of the back of the last real-world server rack you saw and the cables connecting those servers.  Take that thought one step further and think about the cables under the floor blocking what may be primary cold-air flow.

When thinking about cabling, it's important to remember three key points: each cable has a purchase cost, each cable has a power cost, and each cable has a cooling cost.  Even without complex metrics to quantify those three in total, it's easy to see that reducing cables reduces cost.
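Those three costs can be sketched in a back-of-envelope model.  Everything below is an illustrative assumption (per-cable prices, port power, electricity rate), not data from the post; the PUE gross-up reflects the cooling overhead discussed earlier:

```python
def annual_cable_cost(num_cables, purchase_cost_usd, watts_per_cable,
                      usd_per_kwh=0.10, pue=2.5):
    """Rough yearly cost of a cable plant: the purchase cost plus the
    power drawn by the ports it lights, grossed up by PUE so that the
    cooling/distribution overhead is counted too.  All inputs here are
    illustrative assumptions."""
    hours_per_year = 24 * 365
    energy_kwh = num_cables * watts_per_cable / 1000 * hours_per_year * pue
    return num_cables * purchase_cost_usd + energy_kwh * usd_per_kwh

# Hypothetical consolidation from 2000 cables down to 500:
before = annual_cable_cost(2000, purchase_cost_usd=5, watts_per_cable=1)
after = annual_cable_cost(500, purchase_cost_usd=5, watts_per_cable=1)
print(round(before - after, 2))  # savings per year under these assumptions
```

The point of the sketch is simply that the savings scale with the number of cables removed, across all three cost components at once.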

Taking all four of those factors into account and producing a solution that provides benefits for each is the goal of cloud computing.  Solve one problem in isolation and you will most likely worsen another.  Cloud computing is a tool to reduce infrastructure and cabling for everything from a Small-to-Medium Business (SMB) up to a global enterprise.  At the same time, cloud infrastructures support faster adoption times for business applications.  Say it how you will, but 'cloud' has the potential to reduce cost while improving 'time-to-market', 'business agility', 'data-center flexibility', or any other term you'd like to apply.  Cloud is simply the concept of rethinking the way we do IT today in order to meet the challenges of the way we do business today.  If right now you're asking 'why aren't we/they all doing it?', then stay tuned for my next post on the challenges of adopting cloud architectures.


Comments

  1. Thanks for the new post, Joe. I wanted to add some informative details on the cooling part, based on my experience. The traditional forced-air cooling approach will typically give between 3 kW and 5 kW of cooling capacity per rack. That is when you would see a single blade chassis (maybe not fully populated) in a single rack. This can be optimized, of course, using techniques like rack-side coolers or certain containment solutions. Nevertheless, and at best, hot-aisle containment solutions can push the cooling capacity to about 10-12 kW per rack, which may solve some density issues, but the problem may persist, especially when high-density computing platforms are installed.

    Cloud computing is evolving the industry toward better optimization of hardware resources, but because we are optimizing, we are also increasing the computing density per rack, which means higher cooling needs. If the cooling system cannot handle the density our cloud system is optimizing toward, then we would be forced to operate at a higher temperature (which is not acceptable for mission-critical environments).

    I have heard of an additional solution under the "precision cooling" category. The input to the data center comes from water chillers; this chilled water is pumped directly to racks that have their own compressor and heat exchanger (in a redundant configuration). As a result, air is circulated only within the rack, and cooling densities of up to 20-22 kW per rack can be achieved.

    Hope you found this information interesting.

  2. One small thing to add. If you want a very quick estimated calculation of how much cooling capacity you need, you can assume that the power consumed by the hardware equals the cooling power required. So if you have a core switch consuming 4 kW of power, you would not need more than about 4 kW of cooling. Nevertheless, we are technical people, and shouldn't be estimating things that way! To get the exact value, look up the heat dissipation of all active components installed in a rack, usually given in BTUs (British Thermal Units) per hour. You typically need 1 kW of cooling capacity for every 3412.14 BTU/hour of heat dissipated.

