4 Generations of Microsoft Datacenters

I came across this and found it interesting, as I am not a datacenter person.

To understand Microsoft's sustainability evolution, we must look at the generations of data centers that Microsoft has built over the last 20 years.

Along the way, we’ve tried different approaches, and we’ve operated data centers for quite some time now. We’ve tried very different ways of improving power usage effectiveness and driving down cost in these environments. We’ve passed generation 1, which was all about co-located data centers—sticking your servers and racks in a room—and moved on to generation 2 data centers. The difference with generation 2 is that we started to install them a rack at a time, no longer putting in individual servers and getting the scissor lift out to mount each one in the rack. You were actually trucking in and wheeling in the entire rack, mounting it, plugging it all up, and getting it running. When you start designing around racks instead of around servers in racks, there’s a whole different approach you can take to hot aisles, cold aisles, how you control the airflow through the data center itself, and how you maximize the efficiency you can get out of it.
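Since the paragraph above leans on the term power usage effectiveness (PUE), here is a minimal sketch of the metric itself: total facility power divided by the power that actually reaches the IT equipment. The numbers below are illustrative assumptions, not Microsoft figures.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal (every watt goes to the servers);
    legacy colocation rooms often run near 2.0, i.e. a full watt of
    cooling and power-distribution overhead per watt of compute.
    """
    return total_facility_kw / it_equipment_kw

# Illustrative numbers only: a 1 MW IT load in a facility drawing 1.5 MW total.
print(pue(total_facility_kw=1500, it_equipment_kw=1000))  # -> 1.5
```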

Then in about 2008, we moved to a containment model. The idea was: rather than shipping racks, why don’t we ship multiple, pre-configured racks? We would buy them from our hardware vendors, already installed in an ISO shipping container. They would just come off the back of the truck and get lowered into place, and then away they go with 3,000 or 4,000 servers configured inside. That was great in terms of scalability—we could turn it on a lot faster—but the thermal properties of an ISO shipping container aren’t really kind to your hardware. That takes us to our fourth model, which is the modular-style data center. These are the latest things we’ve started to deploy. We call them ITPACs, or Information Technology Pre-Assembled Components, and essentially in this model we’re trying to eliminate as many assets and moving parts as possible.
 
In generation 1 it was really all about colocation: bringing the disparate resources that we had distributed throughout the network into a single location. Many of our customers are still dealing with these types of facilities, and the problem they're running into is that they're out of power and space.

In generation 2 we made a conscious decision to design, build, and operate our own data centers, the reason being that we were seeing bleed, or growth, in the cost baselines of the generation 1 configuration. We built those facilities around density: the ability to roll a fully populated rack onto the data center floor, move it into place, power it up, and have it online.

In generation 3 it was really about containment, not necessarily about containers, although containers ended up being the end-item SKU we chose in one location. The other approach we took was to put servers in pods and to distribute airflow through those pods to achieve the efficiency we have in the generation 3 configuration.

In generation 4 it's all about modularity. The approach is to drop in an engineering spine—both network and power—and then plug and play all the components required to run the operation.

The ITPAC is made up of four components: the IT load, an evaporative cooling unit, an air handling unit, and a mixing unit. The evaporative cooling unit has a mesh screen through which water slowly drips to maintain a consistent level of humidity. Air naturally blows through the evaporative cooling unit; the IT load sucks that air through, and the air handling unit ensures there’s a pressure difference between the outside and the inside of the data center. Air gets pulled naturally through those evaporative cooling units, without having to be driven by fans or anything like that, cooling the servers. The air handling unit pulls air out and pushes some of it back to the mixing unit. That’s basically how we control the temperature.
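To make the mixing step above concrete, here is a minimal sketch of the air blending as a simple energy balance: a fraction of warm server exhaust is recirculated into the incoming outside air to hit a target supply temperature. The function name, the setpoints, and the equal-mass-flow assumption are mine for illustration, not part of the ITPAC design.

```python
def recirculation_fraction(t_outside: float, t_return: float, t_target: float) -> float:
    """Fraction of warm return air to blend back in so the mixed supply
    air hits t_target, under a simple energy balance with equal mass flows:

        t_supply = f * t_return + (1 - f) * t_outside

    Solves for f and clamps it to [0, 1]. Temperatures in Celsius.
    Illustrative sketch only: a real air handler also accounts for
    humidity, fan pressure, and the evaporative stage upstream.
    """
    if t_return == t_outside:
        return 0.0  # blending cannot change the mixed temperature
    f = (t_target - t_outside) / (t_return - t_outside)
    return max(0.0, min(1.0, f))

# Cold morning: 5 C outside, 35 C server exhaust, 20 C target supply air.
print(recirculation_fraction(t_outside=5, t_return=35, t_target=20))  # -> 0.5
```

The clamp matters: on a hot day the outside air may already exceed the target, in which case no amount of recirculation helps and the evaporative stage has to do the work.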

The beauty of this model is that you don’t even have to worry about air conditioning anymore. The servers live outside; the data center lives outside. The Quincy data center in Eastern Washington has a climate similar to Madrid’s, with a little more water. But these things can live outside. We built a roof over the top to keep off the worst of the weather.

It takes about four hours to install, and it has only three connections coming out of it: the ping, the pipe, and the power. Essentially, garden-variety electricity, a water hose, and a network connection. It’s all remotely monitored, all operated from this unit. We’re extremely energy efficient in how we do these things, and we’re installing about 2,500 servers at a time in these building blocks.

Copyright Microsoft

Elias Ramos Guadalupe

Advocating Canary Islands as the leading European Union Hub for remote workers in the Atlantic. Owner at Draco.

8y

Spectacular and very fast progress in DC generations. And each one significantly more efficient than the previous one!

Pablo Junco Boquer

Driving Business Impact with Analytics & a Responsible use of AI

8y

Nice! As you said, Microsoft already has Generation 4 datacenters, and Power Usage Effectiveness (PUE) has improved a lot, to approximately 1.12 with the containers and air-side economizers. However, Microsoft is already building its Generation 5 datacenters: software-defined facilities that integrate the full stack, designing the server, datacenter, network, applications, and workflow processes together.
