This time of year brings the whole aspect of power to the fore for me; not a desire to run the country, but electricity. It was this time last year that a cable failed locally and we lost electricity for 40 hours. We do have a gas fire but no other cooking capabilities, and the challenge of seeing teenagers without games consoles or internet access was a revelation. We all ended up doing a jigsaw by candlelight, which brought back memories of childhood in the early 70s and the associated power cuts.
This year the issue is the cost of power. I have personally seen a huge increase in my electricity charges and so have been busily running round installing low-energy bulbs and the latest thing, high-power LEDs. I have managed to reduce the power draw in one room from 450W to less than 50W, and the light is quite pleasant too.
Saving money by using less power is viewed as a reason to make changes in the data centre. These savings can range from the simple to the extreme. Moving your data centre to Iceland, where power is effectively free and cooling can be achieved from the outside air, sits at the extreme end; most look for intrinsic savings from servers and other resources.
Some simple maths starts to apply and, while not always accurate, the theory stands. If, by a combination of new servers and virtualization, we can take a row of servers and collapse them into one or two racks, we save on the power of each server, the cooling required for each server, the power draw and cooling of the network equipment, and the same for local storage. I have spoken in previous articles about the challenges that virtualization places on the network, so I won't revisit that area now, but the network does have a part to play.
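To make that simple maths concrete, here is a small sketch of the consolidation saving. All the figures are assumptions chosen for illustration (server draw, PUE overhead for cooling, and electricity price vary enormously between facilities), not numbers from any real data centre.

```python
# All constants below are illustrative assumptions, not real-world figures.
SERVER_DRAW_W = 400   # assumed average draw per physical server, in watts
PUE = 1.8             # assumed power usage effectiveness (cooling etc. overhead)
KWH_PRICE = 0.15      # assumed electricity price per kWh

def annual_cost(servers, draw_w=SERVER_DRAW_W, pue=PUE, price=KWH_PRICE):
    """Annual electricity cost for a group of servers, including cooling via PUE."""
    kwh_per_year = servers * draw_w / 1000 * 24 * 365
    return kwh_per_year * pue * price

before = annual_cost(40)  # a row of 40 standalone servers
after = annual_cost(8)    # the same workloads consolidated onto 8 hosts
print(f"saving: {before - after:,.0f} per year")
```

The interesting part is the PUE multiplier: every watt saved at the server also saves the fraction of cooling and distribution power spent supporting it, which is why collapsing a row into a rack or two pays back more than the raw server wattage suggests.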
While the obvious benefit of virtualization is being able to consolidate services, the other side to it is live migration. This allows us to maximise the utilization of each server by moving applications from one server to another, so in quiet times we could move an application off a lightly used server, giving us the option to turn that server off. Ideally we could do that with a whole rack or row, which could mean turning off the cooling and lights in that area too. The objection to this comes from applications that need a dedicated server, but the current view appears to be that sitting such an application on a hypervisor could provide live migration without having to share the server.
Surely this has nothing to do with networking or security, I hear you shout, but au contraire: if you want total flexibility in live migration, you need everything to be location independent. This takes us back to bubbles and shadows. An application needs to be close to any resources it may need: databases, web servers, storage, etc. If it is within one switch hop of these things, it is in a bubble of happiness. In a tree structure this means that, in general, an application is happiest in the same rack as its other resources, and as soon as you start moving things around you create delay and a less optimized environment.
From a security angle, security devices generally live at the end of a row, or maybe even further up the tree, and cast a shadow across everything downstream. There is a danger that when you move an application from one shadow to another, the security state information is lost.
In theory, moving to a Fabric-based architecture should solve these problems, but as we know, not all Fabrics are born equal and not all provide the flexibility of Juniper Networks QFabric. The security issue is a tougher one to solve, as we need to start thinking in a different way. We must change our thoughts from the physical to the virtual: even though much of the enforcement will happen in a physical device, that device must act as a virtualised security point. With a Fabric we can build a flat network, which allows us to cast a single security shadow across the infrastructure. However, we need to be able to extend that into the hypervisor and build a single set of policies that ensure the correct security policy is applied whenever a VM is moved or created.
Until we can be confident that live migration is easily supported by the Fabric and does not create security policy issues, the benefits will not be truly realised. I would be really interested to know whether organisations are looking at live migration as a method to save power by turning off sections of the data center.
Power, if it is not already, will become a focus of cost and social responsibility. I was amazed by some numbers from the latest DCD UK census report, which is well worth reading. The power consumption of UK data centers is 6.44 GW, which is enough to power 6 million homes and equal to 3.1% of the total power generated.
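As a quick sanity check on the quoted figures, dividing the data centre draw by the number of homes implies an average household draw of roughly 1 kW, which is a plausible order of magnitude. This is just arithmetic on the census numbers, not an independent estimate.

```python
# Arithmetic check on the DCD census figures quoted above.
dc_power_gw = 6.44          # UK data centre power consumption, per the report
homes_powered = 6_000_000   # homes the report says that could supply

avg_home_kw = dc_power_gw * 1e6 / homes_powered  # GW -> kW per home
print(f"implied average draw per home: {avg_home_kw:.2f} kW")
```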
Many calculations of power usage are based on an average of about 4 kW per rack, but some are saying that 17 kW per rack is acceptable with normal hot/cold aisle cooling. If so, the savings could be even greater. What are your thoughts on this subject?
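To see why the higher density figure matters, compare how many racks a given load needs at each density. The 680 kW total load is an assumed figure picked to divide evenly; the 4 kW and 17 kW densities are the ones quoted above.

```python
# Racks required for the same IT load at the two densities quoted above.
total_load_kw = 680  # assumed aggregate IT load for a data hall (illustrative)

racks_needed = {}
for rack_kw in (4, 17):
    racks_needed[rack_kw] = -(-total_load_kw // rack_kw)  # ceiling division
    print(f"{rack_kw} kW/rack -> {racks_needed[rack_kw]} racks")
```

At 17 kW per rack the same load fits in roughly a quarter of the racks, which is floor space, network ports, and cooled volume that no longer needs to be powered at all.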