The Genome Center (where I work) is building a new data center. With the avalanche of data that the next-generation sequencers generate, we need a significant expansion in computing and storage infrastructure. Here is a picture taken from our current lab across the street. Yes, it does come with its own red racing stripe. Several people have commented that it is not the prettiest building. All I can say is that I had nothing to do with the exterior design.

So what does that picture show? The one-story front portion of the building is the actual data center. Well, most of it is: it also includes a loading dock and some storage along the near (west) wall and a corridor along the left (north) wall. There are also mechanical rooms (housing large air handlers) on the east and west sides of the data center. The two-story portion of the building has a small office on the first floor. The remainder of the first floor and the entire second floor house the electrical and mechanical supporting equipment. That is the dirty little secret of high-density computing and storage systems: while the computer equipment has shrunk significantly, the electrical and mechanical systems needed to support it just keep getting bigger. So in our 16,000-square-foot building, we have about 3,100 square feet of actual data center space. Ouch.
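To put that in perspective, here is a quick back-of-the-envelope calculation using the square footage above (the percentage is my own arithmetic, not a figure from the building plans):

```python
# Floor-space math for the new building: how much of the footprint is
# actual data center space vs. everything that supports it.
building_sqft = 16_000      # total building footprint
data_center_sqft = 3_100    # actual data center floor space

support_sqft = building_sqft - data_center_sqft
fraction = data_center_sqft / building_sqft

print(f"Data center floor: {fraction:.1%} of the building")              # ~19.4%
print(f"Everything else (electrical, mechanical, loading dock, office): "
      f"{support_sqft:,} sq ft")                                         # 12,900 sq ft
```

Roughly four-fifths of the building exists to keep the other one-fifth powered and cooled.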

Now there are some ways to reduce the size of the supporting equipment. On the electrical side, you can go with fly-wheel UPS systems instead of battery UPS systems. The plus side is that fly-wheel systems are a little smaller and require less maintenance. The down side is that fly-wheel systems only supply energy for 20-30 seconds (so you had better have a good generator) and they have higher purchase prices (although lifetime cost may be lower due to the higher maintenance costs of batteries). Unfortunately, the generators, transformers, power distribution units, etc. are just huge.
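A rough sketch of why that 20-30 second ride-through makes the generator so critical. The generator start and transfer times below are hypothetical placeholders for illustration, not numbers from our site:

```python
# Does the flywheel's stored energy outlast the generator start sequence?
flywheel_ride_through_s = 20   # low end of the 20-30 second range
generator_start_s = 10         # assumed time for the generator to start and stabilize
transfer_switch_s = 2          # assumed automatic transfer switch delay

margin_s = flywheel_ride_through_s - (generator_start_s + transfer_switch_s)
print(f"Margin before the flywheel runs out: {margin_s} seconds")  # 8 seconds here
# If the generator needs a second start attempt, that margin is gone.
# A battery UPS, with minutes of runtime, is far more forgiving.
```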

On the mechanical (cooling) side, there is actually an alternative cooling strategy we investigated that could result in significant space savings. The design that was chosen is basically an amped-up large-building cooling system, i.e., chilled water. So you have large air handling units, chillers, cooling towers, tanks, and lots of pumps, pipes, and valves. It was chosen because it is what the designers, construction managers, and facilities teams were familiar with.

An alternative design uses compressor-based computer-room air conditioners (CRAC or Liebert units) coupled with refrigerant-based point cooling attached to the racks (blades generate a lot of heat). The CRAC units have condensers in them, the point-cooling systems require a condensing system outside the data center, and all of these units require heat dissipation equipment on the roof of the building. So while you still need some equipment outside the data center, a lot of it is on the roof and it is much smaller than the very large chilled water plant (chillers, pumps, cooling towers, air handlers, and tanks). On the other hand, you end up with a lot more systems to maintain (instead of three chillers, you have 20 or so condensers). There are some pluses to having lots of units, though. When N is large, N+1 redundancy costs a lot less than when N is small. You also have lower initial costs because the cooling system is only as large as it needs to be at first, i.e., you don't buy a bunch of extra capacity on day one when only a quarter of your data center is filled. And you don't have to buy a big, complex, expensive control system, since the CRAC units have controllers built in (and can be networked to work together).
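To make the redundancy point concrete, here is a small sketch. The unit counts match the paragraph above (three chillers vs. roughly twenty condensers); the "overhead" is simply the extra capacity that the +1 spare represents:

```python
# Why N+1 redundancy is proportionally cheaper when N is large.
def n_plus_1_overhead(n_units: int) -> float:
    """Fraction of extra capacity required to add one redundant unit."""
    return 1 / n_units

designs = [
    ("Chilled-water plant (3 chillers)", 3),
    ("CRAC/point-cooling design (~20 condensers)", 20),
]
for label, n in designs:
    print(f"{label}: the +1 spare adds {n_plus_1_overhead(n):.0%} extra capacity")
# 3 chillers    -> the spare is ~33% extra capacity
# 20 condensers -> the spare is only 5% extra capacity
```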

If you want, you can see more pictures of the data center construction.